VERITAS Storage Foundation 4.1 for Oracle RAC


VERITAS Storage Foundation 4.1 for Oracle RAC
Installation and Configuration Guide
Linux
N16350H
August 2005

Disclaimer

The information contained in this publication is subject to change without notice. VERITAS Software Corporation makes no warranty of any kind with regard to this manual, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. VERITAS Software Corporation shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this manual.

VERITAS Legal Notice

Copyright VERITAS Software Corporation. All rights reserved. VERITAS and the VERITAS Logo are trademarks or registered trademarks of VERITAS Software Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.

VERITAS Software Corporation
350 Ellis Street
Mountain View, CA USA

Third-Party Legal Notices

Certain third-party software may be distributed, embedded, or bundled with this VERITAS product, or recommended for use in conjunction with VERITAS product installation and operation. Such third-party software is separately licensed by its copyright holder. See the Third-Party Legal Notice appendix in this product's Release Notes for the licenses that govern the use of the third-party software and for proprietary notices of the copyright holders.

Contents

Preface  xi
  How this Guide Is Organized  xi
  Documentation  xiv
  Conventions  xvi
  Getting Help  xvii
  Documentation Feedback  xvii

Section I. Using Storage Foundation for Oracle RAC

1. Overview: Storage Foundation for Oracle RAC  3
  What is RAC?  3
  Storage Foundation for Oracle RAC Overview  6
  Storage Foundation for Oracle RAC Communications  8
  Group Membership Services/Atomic Broadcast (GAB)  9
  Cluster Volume Manager (CVM)  10
  Cluster File System  11
  Oracle Disk Manager  11
  VERITAS Cluster Server  12
  Storage Foundation for Oracle RAC OSD Layer Support  14
  About I/O Fencing  15

2. Preparing for Installation of Storage Foundation for Oracle RAC  19
  Installation Tasks  20
  Reviewing the Requirements  22
  Reviewing the Configuration  24
  Obtaining License Keys  27
  Preinstallation Configuration  28
  Information Required for the installsfrac Script

3. Installing and Configuring Storage Foundation for Oracle RAC  33
  Installing Storage Foundation for Oracle RAC
  Verifying GAB Port Membership  47
  Setting Up Shared Storage and I/O Fencing for SFRAC  49
  Configuring CVM and CFS Manually (Optional)  62
  Example VCS Configuration File After Installation

4. Using I/O Fencing  67
  About I/O Fencing of Shared Storage  67
  Verifying Data Storage Arrays Using the vxfentsthdw Utility  68
  How I/O Fencing Works in Different Event Scenarios

5. Troubleshooting Storage Foundation for Oracle RAC  75
  Running Scripts for Engineering Support Analysis  75
  Troubleshooting Tips  76
  Troubleshooting Oracle  76
  Troubleshooting Fencing  82
  Troubleshooting ODM  88
  Troubleshooting VCSIPC  89
  Troubleshooting CVM  90
  Troubleshooting Interconnects  92

Section II. Using Oracle 10g on Red Hat

6. Installing Oracle 10g Software on Red Hat  95
  Configuring Oracle 10g Release 1 Prerequisites  95
  Installing Oracle 10g CRS and Database  101

7. Upgrading and Migrating Oracle Software on Red Hat  119
  Migrating from Oracle9i to Oracle 10g  119
  Applying Oracle 10g Patchsets on Red Hat  132

8. Configuring VCS Service Groups for Oracle 10g on Red Hat  137
  Creating an Oracle Service Group  137
  Configuring CVM Service Group for Oracle 10g Manually  143
  Location of VCS Log Files

9. Adding and Removing Cluster Nodes for Oracle 10g on Red Hat Linux  149
  Adding a Node to an Oracle 10g Cluster  149
  Removing a Node from an Oracle 10g Cluster

10. Uninstalling Storage Foundation for Oracle RAC on Oracle 10g Systems  173
  Offlining Service Groups  173
  Unlinking Oracle 10g Binaries from VERITAS Libraries  174
  Removing Storage Foundation for Oracle RAC Packages  175
  Removing Other VERITAS Packages  178
  Removing License Files  178
  Removing Other Configuration Files  178

Section III. Using Oracle9i on Red Hat

11. Installing Oracle9i Software on Red Hat  181
  Oracle Installation Checklist  181
  Configuring Prerequisites for Oracle9i  182
  Installing Oracle9i Release 2 Software  185
  Applying Oracle9i Patchsets on Red Hat  199

12. Configuring VCS Service Groups for Oracle 9i on Red Hat  203
  Storage Foundation for Oracle RAC 4.1 Service Groups: An Overview  203
  Checking cluster_database Flag in Oracle9i Parameter File  204
  Configuring CVM and Oracle Groups: Overview  204
  Configuring CVM and Oracle Service Groups Manually  206
  Location of VCS and Oracle Agent Log Files  213
  Configuring the Service Groups Using the Wizard

13. Adding and Removing Cluster Nodes Using Oracle9i on Red Hat  231
  Adding a Node to an Oracle9i Red Hat Cluster  231
  Removing a Node for Oracle9i on Red Hat

14. Uninstalling Storage Foundation for Oracle RAC on Systems Using Oracle9i  253
  Stopping gsd and Applications Using Oracle9i on CFS  255
  Offlining the Oracle and Netlsnr Resources  256
  Removing the Oracle Database (optional)  256
  Uninstalling Oracle9i (optional)  256
  Unlinking Oracle9i Binaries from VERITAS Libraries  257
  Removing Storage Foundation for Oracle RAC Packages  258
  Removing Other VERITAS Packages  260
  Removing License Files  261
  Removing Other Configuration Files  261

Section IV. Using Oracle 10g on SLES9

15. Installing Oracle 10g Software on SLES9  265
  Configuring Oracle 10g Release 1 Prerequisites  265
  Applying Oracle 10g Patchsets on SLES9  294

16. Configuring VCS Service Groups for Oracle 10g on SLES9  299
  Creating an Oracle Service Group  299
  Configuring CVM Service Group for Oracle 10g Manually  305
  Location of VCS Log Files

17. Adding and Removing Cluster Nodes for Oracle 10g on SLES9  311
  Adding a Node to an Oracle 10g Cluster  311
  Removing a Node from an Oracle 10g Cluster

18. Uninstalling Storage Foundation for Oracle RAC on Oracle 10g Systems  335
  Offlining Service Groups  335
  Unlinking Oracle 10g Binaries from VERITAS Libraries  336
  Removing Storage Foundation for Oracle RAC Packages  337
  Removing Other VERITAS Packages  340
  Removing License Files  340
  Removing Other Configuration Files  340

Appendixes

A. Sample Configuration Files  343
  Sample main.cf Using Oracle 10g  343
  Sample main.cf Using Oracle9i  345

B. Creating a Starter Database  351
  Overview  351
  Database Tablespace Requirements  352
  Creating Tablespaces on CFS for Oracle 10g  353
  Creating Tablespaces on Shared Raw Volumes for Oracle 10g  355
  Creating the Starter Database for Oracle 10g  356
  Creating Tablespaces on CFS for Oracle9i  357
  Creating Tablespaces on Shared Raw Volumes for Oracle9i  359
  Creating the Starter Database for Oracle9i  360

C. Using Agents  361
  Configuring the CVMCluster Agent  362
  Configuring the CVMVolDg Resources  364
  Configuring CFSMount Resources  366
  Configuring the PrivNIC Agent  369
  Configuring the CSSD Agent  373

D. Tunable Kernel Driver Parameters  375
  LMX Tunable Parameters  375
  VCSMM Tunable Parameters  377
  VXFEN Tunable Parameter  378

E. Error Messages  379
  LMX Error Messages, Critical  379
  LMX Error Messages, Non-Critical  380
  VxVM Errors Related to I/O Fencing  382
  VXFEN Driver Error Messages  382
  VXFEN Driver Informational Message  383
  Informational Messages When Node is Ejected  383

Glossary  385
Index  391

Preface

VERITAS Storage Foundation 4.1 for Oracle RAC software is an integrated set of software products. It enables administrators of Oracle Real Application Clusters (RAC) to operate a database in an environment of cluster systems running VERITAS Cluster Server (VCS) and the cluster features of VERITAS Volume Manager and VERITAS File System, also known as CVM and CFS, respectively.

This guide provides information for installing VERITAS Storage Foundation 4.1 for Oracle RAC on the Linux operating system. It includes information on configuring and maintaining Oracle Real Application Clusters running on a VCS cluster with VxVM disk management.

How this Guide Is Organized

Section I: Using Storage Foundation for Oracle RAC

Chapter 1, Overview: Storage Foundation for Oracle RAC on page 3, describes the components of Oracle RAC and the Storage Foundation for Oracle RAC components that support Oracle Real Application Clusters.

Chapter 2, Preparing for Installation of Storage Foundation for Oracle RAC on page 19, provides a visual summary of the installation procedures required for installing Storage Foundation for Oracle RAC.

Chapter 3, Installing and Configuring Storage Foundation for Oracle RAC on page 33, describes the sequence of steps to install the required software and how to perform initial configuration. The procedures for testing the storage arrays for compliance with SCSI-3 persistent reservations are also described.

Chapter 4, Using I/O Fencing on page 67, describes I/O fencing behavior in several scenarios.

Chapter 5, Troubleshooting Storage Foundation for Oracle RAC on page 75, discusses typical problems and provides recommendations for solving them. You can also use the scripts described in this chapter to generate information about your systems that VERITAS support personnel can use to assist you.

Section II: Using Oracle 10g on Red Hat

Chapter 6, Installing Oracle 10g Software on Red Hat on page 95, describes how to set up shared storage and install Oracle 10g software on clean systems. Installation is typically on shared storage, but procedures for installing on local systems are also provided.

Chapter 7, Upgrading and Migrating Oracle Software on Red Hat on page 119, covers Oracle9i to Oracle 10g migration, applying Oracle patchsets, and relinking VERITAS libraries after upgrades.

Chapter 8, Configuring VCS Service Groups for Oracle 10g on Red Hat on page 137, describes configuring VCS service groups for Oracle 10g, the database, and the listener process in parallel mode. It describes configuring the agents within the service group.

Chapter 9, Adding and Removing Cluster Nodes for Oracle 10g on Red Hat Linux on page 149, describes adding a node to a two-system cluster and removing a node from a three-system cluster.

Chapter 10, Uninstalling Storage Foundation for Oracle RAC on Oracle 10g Systems on page 173, describes how to uninstall and remove the components on systems using Oracle 10g.

Section III: Using Oracle9i on Red Hat

Chapter 11, Installing Oracle9i Software on Red Hat on page 181, describes how to set up shared storage and install Oracle9i software on clean systems. Installation is typically on shared storage, but procedures for installing on local systems are also provided.

Chapter 12, Configuring VCS Service Groups for Oracle 9i on Red Hat on page 203, describes configuring VCS service groups for Oracle9i, the database, and the listener process in parallel mode. It describes configuring the agents within the service group.

Chapter 13, Adding and Removing Cluster Nodes Using Oracle9i on Red Hat on page 231, describes adding a node to a two-system cluster and removing a node from a three-system cluster.

Chapter 14, Uninstalling Storage Foundation for Oracle RAC on Systems Using Oracle9i on page 253, describes how to uninstall and remove the components on systems using Oracle9i.

Section IV: Using Oracle 10g on SUSE

Chapter 15, Installing Oracle 10g Software on SLES9 on page 265, describes how to set up shared storage and install Oracle 10g software on clean systems. Installation is typically on shared storage, but procedures for installing on local systems are also provided.

Chapter 16, Configuring VCS Service Groups for Oracle 10g on SLES9 on page 299, describes configuring VCS service groups for Oracle 10g, the database, and the listener process in parallel mode. It describes configuring the agents within the service group.

Chapter 17, Adding and Removing Cluster Nodes for Oracle 10g on SLES9 on page 311, describes adding a node to a two-system cluster and removing a node from a three-system cluster.

Chapter 18, Uninstalling Storage Foundation for Oracle RAC on Oracle 10g Systems on page 335, describes how to uninstall and remove the components on systems using Oracle 10g.

Section V: Appendixes

Appendix A, Sample Configuration Files on page 343, provides a sample VCS configuration file main.cf for Oracle9i and Oracle 10g, which includes an example configuration of the Oracle and CVM service groups.

Appendix B, Creating a Starter Database on page 351, contains procedures for creating starter databases using dbca or VERITAS scripts on raw shared volumes or in VxFS file systems.

Appendix C, Using Agents on page 361, provides details for the CVMCluster agent configured when you run the installsfrac script. It also describes the CVMVolDg and CFSMount agents that must be configured after installation.

Appendix D, Tunable Kernel Driver Parameters on page 375, describes tunable parameters for the LMX, VCSMM, and VXFEN drivers.

Appendix E, Error Messages on page 379, lists LMX and VXFEN messages that you may encounter.

Documentation

Components of Storage Foundation for Oracle RAC are described fully in the documents provided for VERITAS Volume Manager, VERITAS File System, and VERITAS Cluster Server. These documents provide important information on the software components that support CVM, VxFS, and VCS. Documentation is available as Adobe Portable Document Format (PDF) files on the product CD. Documentation is also available in HTML format on the documentation disc included with your software purchase.

Storage Foundation for Oracle RAC Documentation Set

VERITAS Storage Foundation for Oracle RAC:
- VERITAS Storage Foundation for Oracle RAC Installation and Configuration Guide (sfrac_install.pdf)
- VERITAS Storage Foundation for Oracle RAC Release Notes (sfrac_notes.pdf)

VERITAS Cluster Server:
- VERITAS Cluster Server User's Guide (vcs_users.pdf)
- VERITAS Cluster Server Installation Guide (vcs_install.pdf)
- VERITAS Cluster Server Agent Developer's Guide (vcs_agent_dev.pdf)
- VERITAS Cluster Server Bundled Agents Reference Guide (vcs_bundled_agents.pdf)

VERITAS Cluster Server Oracle Enterprise Agent:
- VERITAS Cluster Server Enterprise Agent for Oracle Installation and Configuration Guide (vcs_oracle_install.pdf)

VERITAS Storage Foundation:
- VERITAS Storage Foundation Installation Guide (sf_install.pdf)
- VERITAS Storage Foundation Intelligent Storage Provisioning Administrator's Guide (sf_isp_admin.pdf)
- VERITAS Storage Foundation Cross-platform Data Sharing Administrator's Guide (sf_cds_admin.pdf)

- VERITAS FlashSnap Point-in-Time Copy Solutions Administrator's Guide (flashsnap_admin.pdf)

VERITAS Volume Manager:
- VERITAS Volume Manager Administrator's Guide (vxvm_admin.pdf)
- VERITAS Volume Manager Hardware Notes (vxvm_hwnotes.pdf)
- VERITAS Volume Manager Troubleshooting Guide (vxvm_tshoot.pdf)

VERITAS File System:
- VERITAS File System Administrator's Guide (vxfs_admin.pdf)

VERITAS recommends copying the installation guide and release notes from the software discs containing the product to the /opt/VRTS/docs directory so that they are available on your system for reference.

Oracle Documentation

Oracle documents are not shipped with VERITAS Storage Foundation 4.1 for Oracle RAC. Documents that provide necessary related information for Oracle9i:

- Oracle Real Application Clusters Cluster File System Release Notes
- Oracle9i Database Installation Guide

For Oracle 10g:

- Oracle Real Application Clusters Installation and Configuration, 10g Release 1 (for Linux x86-64)
- Oracle Real Application Clusters Installation and Configuration Guide (for Linux x86 and IA64)

Conventions

monospace: Used for path names, commands, output, directory and file names, functions, and parameters. Example: Read tunables from the /etc/vx/tunefstab file. See the ls(1) manual page for more information.

monospace (bold): Indicates user input. Examples: # ls pubs and C:\> dir pubs

italic: Identifies book titles, new terms, emphasized text, and variables replaced with a name or value. Examples: See the User's Guide for details. The variable system_name indicates the system on which to enter the command.

bold: Depicts GUI objects, such as fields, list boxes, menu selections, etc. Also depicts GUI commands. Examples: Enter your password in the Password field. Press Return.

blue text: Indicates hypertext links. Example: See Getting Help on page xvii.

%: C shell prompt. Example: % setenv MANPATH /usr/share/man:/opt/VRTS/man

$: Bourne/Korn shell prompt. Example: $ MANPATH=/usr/share/man:/opt/VRTS/man; export MANPATH

#: Unix superuser prompt (all shells). Example: # cp /pubs/4.0/user_book /release_mgnt/4.0/archive

Getting Help

For technical assistance, visit the VERITAS Support Web site and select phone or email support. This site also provides access to resources such as TechNotes, product alerts, software downloads, hardware compatibility lists, and our customer notification service. Use the Knowledge Base Search feature to access additional product information, including current and past releases of VERITAS documentation.

Diagnostic tools are also available to assist in troubleshooting problems associated with the product. These tools are available on disc or can be downloaded from the VERITAS FTP site. See the README.VRTSspt file in the /support directory for details.

For license information, software updates, and sales contacts, visit the VERITAS Web site. For information on purchasing product documentation, also see the VERITAS Web site.

Documentation Feedback

Your feedback on product documentation is important to us. Send suggestions for improvements and reports on errors or omissions. Include the title and part number of the document (located in the lower left corner of the title page), and the chapter and section titles of the text on which you are reporting. Our goal is to ensure customer satisfaction by providing effective, quality documentation. For assistance with topics other than documentation, visit the VERITAS Support Web site.


Section I. Using Storage Foundation for Oracle RAC

Storage Foundation for Oracle RAC procedures are generally common across distributions of Linux and versions of Oracle. Operations that differ significantly according to environment are located in sections devoted to specific distributions of Linux and versions of Oracle.

To use the chapters in this guide effectively:

1. Use the overview, planning, and installing chapters to install and configure Storage Foundation for Oracle RAC.

2. Proceed to the section for your distribution of Linux and version of Oracle for:
   - Installing and configuring Oracle
   - Adding and removing nodes
   - Uninstalling Storage Foundation for Oracle RAC

3. Use the I/O fencing, troubleshooting, sample configuration files, and other reference information as needed.

The overview concepts, planning, installing, I/O fencing, and troubleshooting topics apply to all supported Linux distributions and versions of Oracle:

- Chapter 1, Overview: Storage Foundation for Oracle RAC on page 3
- Chapter 2, Preparing for Installation of Storage Foundation for Oracle RAC on page 19
- Chapter 3, Installing and Configuring Storage Foundation for Oracle RAC on page 33
- Chapter 4, Using I/O Fencing on page 67
- Chapter 5, Troubleshooting Storage Foundation for Oracle RAC on page 75


1. Overview: Storage Foundation for Oracle RAC

This chapter describes the Storage Foundation for Oracle RAC product (SFRAC) by examining its components and how each interacts to support Oracle RAC:

- What is RAC?
- Storage Foundation for Oracle RAC Overview
- Storage Foundation for Oracle RAC Communications
- Cluster Volume Manager (CVM)
- Cluster File System
- Oracle Disk Manager
- VERITAS Cluster Server
- Storage Foundation for Oracle RAC OSD Layer Support
- About I/O Fencing

What is RAC?

Real Application Clusters (RAC) is a parallel database environment that takes advantage of the processing power of multiple, interconnected computers. A cluster comprises two or more computers, also known as nodes or servers. In RAC environments, all nodes concurrently run Oracle instances and execute transactions against the same database. RAC coordinates each node's access to the shared data to provide consistency and integrity. Each node adds its processing power to the cluster as a whole and can increase overall throughput or performance.

RAC serves as an important component of a robust high availability solution. A properly configured Real Application Clusters environment can tolerate failures with minimal downtime and interruption to users. With clients accessing the same database on multiple nodes, failure of a node does not completely interrupt access, as clients accessing the surviving nodes continue to operate. Clients attached to the failed node reconnect to a surviving node and resume access. Recovery after failure in a RAC environment is far quicker than with a failover database, because another instance is already up and running. Recovery is a matter of applying outstanding redo log entries from the failed node.

RAC Architecture

From a high level, RAC is multiple Oracle instances accessing a single Oracle database and carrying out simultaneous transactions. An Oracle database is the physical data stored in tablespaces on disk. An Oracle instance represents the software processes necessary to access and manipulate the database. In traditional environments, only one instance accesses a database at a specific time. With Oracle RAC, multiple instances communicate to coordinate access to a physical database, greatly enhancing scalability and availability.

[Figure: RAC Architecture. Two 9i RAC instances, each with a listener serving OCI clients, are joined by a high-speed interconnect. Each node runs the ODM/CFS/CVM stack under VCS and accesses the shared database: datafiles and control files, index and temp files, online redo logs, and archive logs.]

Oracle Instance

The Oracle instance consists of a set of processes and shared memory space that provide access to the physical database. The instance consists of server processes acting on behalf of clients to read data into shared memory as well as make modifications of it, and includes background processes to write changed data out to disk. In an Oracle RAC environment, multiple instances access the same database. This requires significant coordination between the instances to keep each instance's view of the data consistent.

Operating System Dependent Layer

Oracle RAC relies on several support services provided by VCS. The important features are cluster membership, carried out by the cluster membership manager, and inter-node communication. The implementation of these functions is described later in this chapter (see Storage Foundation for Oracle RAC OSD Layer Support on page 14).

VERITAS Cluster Server Membership Manager (VCSMM)

The VERITAS Cluster Server Membership Manager provides a global view of the cluster. VCSMM determines cluster membership and enforces protection of data by preventing systems outside the cluster from corrupting stored data.

Cluster Inter-Process Communication (VCSIPC)

RAC relies heavily on an underlying high-speed interprocess communication mechanism (VCSIPC). This mechanism defines the protocols and interfaces required for the RAC environment to transfer messages between instances.

Lock Management and Cache Fusion

Lock management coordinates access by multiple instances to the same data to maintain data consistency and integrity. Oracle's Cache Fusion provides memory-to-memory transfers of data blocks between RAC instances, which are faster than transfers made by writing to and then reading from disk across the cluster nodes.

Shared Disk Subsystems

RAC also requires that all nodes have simultaneous access to database storage. This gives multiple instances concurrent access to the same database.

Storage Foundation for Oracle RAC Overview

Storage Foundation for Oracle RAC provides a complete I/O and communications stack to support Oracle RAC. It also provides monitoring and management of instance startup and shutdown. The following section describes the overall data and communications flow of the Storage Foundation for Oracle RAC stack.

Data Stack Overview

The various Oracle processes making up an instance (such as DB Writer, Log Writer, Checkpoint, Archiver, and others) read and write data to storage via the I/O stack shown in the diagram. Oracle communicates via the Oracle Disk Manager (ODM) interface to the VERITAS Cluster File System (CFS), which in turn accesses the storage via the VERITAS Cluster Volume Manager (CVM). Each of these components in the data stack is described in this chapter.

[Figure: Data Stack Overview. On each node, the Oracle 9i RAC instance processes (LGWR, ARCH, CKPT, DBWR) issue I/O through ODM to CFS and CVM, which perform disk I/O to the shared redo log files, archive storage, and data and control files.]

The diagram details the overall data flow from an instance running on a server to the shared storage.

Communications Stack Overview

RAC instances must communicate to coordinate protection of data blocks in the database. ODM processes must communicate to coordinate data file protection and access across the cluster. CFS coordinates metadata updates for file systems, and finally CVM must coordinate the status of logical volumes and the distribution of volume metadata across the cluster. VERITAS Cluster Server (VCS) controls starting and stopping of components in the Storage Foundation for Oracle RAC stack and provides monitoring and notification on failure. VCS must communicate the status of its resources on each node in the cluster. For the entire system to work, each layer must reliably communicate.

[Figure: Communications Stack Overview. Each node runs the data stack (RAC, ODM, CFS, CVM) and the VCS core alongside GAB and LLT. GAB and LLT carry cluster state (VCS), Cache Fusion and lock management traffic (RAC), datafile management (ODM), file system metadata (CFS), and volume management (CVM) between nodes.]

The diagram shows the data stack as well as the communications stack. Each of the components in the data stack requires communications with its peer on other systems to function properly. The diagram also shows Low Latency Transport (LLT) and Group Membership Services/Atomic Broadcast (GAB), which make up the communications package central to the operation of Storage Foundation for Oracle RAC. During an operational steady state, the only significant traffic through LLT and GAB is due to Lock Management and Cache Fusion, while the traffic for the other data is relatively sparse.

Storage Foundation for Oracle RAC Communications

Storage Foundation for Oracle RAC communications consist of:

- Low Latency Transport (LLT)
- Group Membership Services/Atomic Broadcast (GAB)

Low Latency Transport (LLT)

LLT provides fast, kernel-to-kernel communications, and monitors network connections. LLT functions as a high-performance replacement for the IP stack. LLT runs directly on top of the Data Link Protocol Interface (DLPI) layer. The use of LLT rather than IP removes latency and overhead associated with the IP stack. LLT has several major functions:

- VCSIPC
- Traffic distribution
- Heartbeat

VCSIPC

RAC Inter-Process Communications uses the VCSIPC shared library for interprocess communication. In turn, VCSIPC uses LMX, an LLT multiplexer, to provide fast data transfer between Oracle processes on different cluster nodes. VCSIPC leverages all of LLT's features.

Traffic Distribution

LLT distributes (load balances) inter-node communication across all available private network links. All cluster communications are evenly distributed across as many as eight network links for performance and fault resilience. On failure of a link, traffic is redirected to the remaining links.

Heartbeat

LLT is responsible for sending and receiving heartbeat traffic over network links. This heartbeat is used by the Group Membership Services function of GAB to determine cluster membership.
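To make the LLT functions concrete, the following is a minimal sketch of the per-node LLT configuration file, /etc/llttab, for a two-link private network. The node name, cluster ID, and interface names (eth1, eth2) are illustrative assumptions; your installation's values are generated by the installsfrac script and may differ.

   # cat /etc/llttab
   set-node galaxy
   set-cluster 7
   link eth1 eth1 - ether - -
   link eth2 eth2 - ether - -

The set-cluster ID must match on all nodes, and each link line names one private interconnect. With both link lines present, LLT load balances cluster traffic across eth1 and eth2 and redirects traffic to the surviving link if one fails.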

Group Membership Services/Atomic Broadcast (GAB)

The Group Membership Services/Atomic Broadcast protocol (GAB) is responsible for:

- Cluster Membership
- Cluster Communications

Cluster Membership

A distributed system such as Storage Foundation for Oracle RAC requires all nodes to be aware of which nodes are currently participating in the cluster. Nodes can leave or join the cluster because of shutting down, starting up, rebooting, powering off, or faulting. Cluster membership is determined using LLT heartbeats. When systems no longer receive heartbeats from a peer for a predetermined interval, a protocol is initiated to exclude the peer from the current membership. When systems start receiving heartbeats from a peer that is not part of the membership, a protocol is initiated for enabling it to join the current membership. The new membership is consistently delivered to all nodes and actions specific to each module are initiated. For example, following a node fault, the Cluster Volume Manager must initiate volume recovery, and the Cluster File System must perform a fast parallel file system check.

Cluster Communications

GAB's second function is to provide reliable cluster communications, used by many Storage Foundation for Oracle RAC modules. GAB provides guaranteed delivery of point-to-point messages and broadcast messages to all nodes. Point-to-point messaging uses a send and acknowledgment. Atomic Broadcast ensures all systems within the cluster receive all messages. If a failure occurs while transmitting a broadcast message, GAB's atomicity ensures that, upon recovery, all systems have the same information.

[Figure: Cluster Communications. Two servers, each with redundant NICs, exchange GAB messaging: cluster membership and state, datafile management, file system metadata, and volume management.]
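As a sketch of how GAB is started and checked in practice: GAB reads its startup configuration from /etc/gabtab, and the gabconfig utility reports current port memberships. The node count below is an assumption for a two-node cluster, and the exact output format varies by release.

   # cat /etc/gabtab
   /sbin/gabconfig -c -n 2

   # gabconfig -a
   GAB Port Memberships
   ===================================
   Port a gen 4a1c0001 membership 01

The -n 2 argument tells GAB to seed the cluster only when two nodes are present. Port a carries the GAB membership itself; additional ports appear as VCS, CVM, CFS, and the fencing driver register with GAB (see Verifying GAB Port Membership on page 47).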

Cluster Volume Manager (CVM)

Cluster Volume Manager is an extension of VERITAS Volume Manager (VxVM), the industry-standard storage virtualization platform. CVM extends the concepts of VxVM across multiple nodes. Each node sees the same logical volume layout and, more importantly, the same state of all volume resources. In a Storage Foundation for Oracle RAC cluster, all storage is managed with standard VxVM commands from one node in the cluster. All other nodes immediately recognize any changes in disk group and volume configuration. CVM supports all performance-enhancing capabilities, such as striping, mirroring, and mirror break-off (snapshot) for off-host backup.

CVM Architecture

Cluster Volume Manager is designed with a master/slave architecture. One node in the cluster acts as the configuration master for logical volume management, and all others are slaves. Any node can take over as master if the existing master fails. The CVM master is established on a per-cluster basis.

Since CVM is an extension of VxVM, it operates in a very similar fashion. The volume manager configuration daemon, vxconfigd, maintains the configuration of logical volumes. Any changes to volumes are handled by vxconfigd, which updates the operating system at the kernel level when the new volume state is determined. For example, if a mirror of a volume fails, it is detached from the volume and the error is passed to vxconfigd, which then determines the proper course of action, updates the new volume layout, and informs the kernel of the new volume layout. CVM extends this behavior across multiple nodes in a master/slave architecture. Changes to a volume are propagated to the master vxconfigd. The vxconfigd process on the master pushes these changes out to the slave vxconfigd processes, each of which in turn updates the local kernel.

CVM does not impose any write locking between nodes. Each node is free to update any area of the storage. All data integrity is the responsibility of the upper application. From an application perspective, logical volumes are accessed identically on a stand-alone system and on a CVM system.

CVM imposes a Uniform Shared Storage model. All systems must be connected to the same disk sets for a given disk group. Any system unable to see the entire set of physical disks that other nodes see for a given disk group cannot import the disk group. If a node loses contact with a specific disk, it is excluded from participating in the use of that disk. CVM uses GAB and LLT for transport of all its configuration data.
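A brief sketch of what this looks like from the command line (the disk group and disk names are illustrative assumptions): the master/slave role of a node can be checked with vxdctl, and a shared disk group is created from the master with the -s option.

   # vxdctl -c mode
   mode: enabled: cluster active - MASTER

   # vxdg -s init oradatadg sdc sdd

Once imported as shared, the disk group and its volumes are visible to every node in the cluster, and any configuration change made on one node is propagated to the others as described above.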

Cluster File System

The VERITAS Cluster File System (CFS) is an extension of the industry-standard VERITAS File System (VxFS). CFS allows the same file system to be simultaneously mounted on multiple nodes. Unlike other clustered file systems, CFS is a true SAN file system. All I/O takes place over the storage area network. Coordination between nodes is enabled by messages sent across the cluster interconnects.

CFS Architecture

CFS is designed with a primary/secondary architecture. Though any node can initiate an operation to create, delete, or resize data, the actual operation is carried out by the master node. Since CFS is an extension of VxFS, it operates in a very similar fashion. As with VxFS, it caches metadata and data in memory (typically called buffer cache or vnode cache). A distributed locking mechanism, called the Global Lock Manager (GLM), is used for metadata and cache coherency across the multiple nodes. GLM provides a way to ensure all nodes have a consistent view of the file system. When any node wishes to read data, it requests a shared lock. If another node wishes to write to the same area of the file system, it must request an exclusive lock. The GLM revokes all shared locks before granting the exclusive lock and informs reading nodes that their data is no longer valid.

CFS Usage in Storage Foundation for Oracle RAC

CFS is used in Storage Foundation for Oracle RAC to manage a file system in a large database environment. When used in Storage Foundation for Oracle RAC, Oracle accesses data files stored on CFS file systems with the ODM interface. This bypasses the file system buffer and file system locking for data, which means only Oracle buffers data and coordinates writing to files, not the GLM, which is minimally used with the ODM interface. A single point of locking and buffering ensures maximum performance.

Oracle Disk Manager

The Oracle Disk Manager (ODM) is a standard API specified by Oracle for doing database I/O. For example, when Oracle wishes to write, it calls the odm_io function. ODM improves both performance and manageability of the file system. The VERITAS-provided implementation of ODM improves performance by providing direct access for the database to the underlying storage without passing through the actual file system interface. This means the database sees performance that is equivalent to using raw devices. The administrator sees the storage as easy-to-manage file systems, including the support of resizing data files while in use.
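As an illustration of the cluster mount described above (the device, disk group, and mount point names are assumptions for the example), a CFS file system is mounted with the cluster option on each node, and fsclustadm reports which node is currently the CFS primary:

   # mount -t vxfs -o cluster /dev/vx/dsk/oradatadg/oravol /oradata
   # fsclustadm -v showprimary /oradata

In normal operation these mounts are managed by the CFSMount agent (see Appendix C, Using Agents) rather than mounted by hand.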

ODM Clustering Extensions

All Oracle Disk Manager features have been extended to operate in a cluster environment. Nodes communicate with each other before performing any operation that could potentially affect another node. For example, before creating a new data file with a specific name, ODM checks with other nodes to see if the file name is already in use.

VERITAS Cluster Server

VCS in the Storage Foundation for Oracle RAC environment functions as a director of operations. It controls startup and shutdown of the component layers of RAC. In the Storage Foundation for Oracle RAC configuration, the RAC service groups run as parallel service groups. VCS does not attempt to migrate a failed service group, but it can be configured to restart it on failure. VCS also notifies users of any failures. Storage Foundation for Oracle RAC provides specific agents for VCS to operate in the Storage Foundation for Oracle RAC environment, including agents for CVM, CFS, and Oracle.

VCS Architecture

VCS communicates the status of resources running on each system to all systems in the cluster. The High Availability Daemon, or HAD, is the main VCS daemon running on each system. HAD collects all information about resources running on the local system, forwarding it to all other systems in the cluster. It also receives information from all other cluster members to update its own view of the cluster. Each type of resource supported in a cluster is associated with an agent. An agent is an installed program designed to control a particular resource type. Each system runs the agents necessary to monitor and manage the resources configured to run on that node. The agents communicate with HAD on the node. HAD distributes its view of resources on the local node to other nodes in the cluster using GAB and LLT.

Storage Foundation for Oracle RAC Service Groups

Storage Foundation for Oracle RAC uses parallel service groups to support RAC. There is one CVM group per server. This group has the CVM resource and the necessary resources for support of CFS. Users can modify this group to contain all common components needed by Oracle to support RAC. This includes a shared ORACLE_HOME directory and the Oracle Net Services process (LISTENER). Users also create a service group for each Oracle database on shared storage, specifying the supporting CVM and CFS resources.

[Figure: Storage Foundation for Oracle RAC Service Groups. The Oracle database service group (parallel) contains an Oracle resource depending on CFSMount and CVMVolDg resources. The CVM service group (parallel) contains Netlsnr, IP, NIC, CFSMount, CFSfsckd, CVMVolDg, CVMCluster, and CVMVxconfigd resources.]
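To give a feel for how these parallel groups are expressed in VCS, here is a heavily abridged main.cf sketch. The group, system, disk group, and Oracle names are illustrative assumptions; complete, accurate examples are in Appendix A, Sample Configuration Files.

   group cvm (
       SystemList = { galaxy = 0, nebula = 1 }
       Parallel = 1
       AutoStartList = { galaxy, nebula }
       )
       CVMCluster cvm_clus (
           CVMTransport = gab
           )

   group oradb1_grp (
       SystemList = { galaxy = 0, nebula = 1 }
       Parallel = 1
       )
       Oracle ora1 (
           Sid @galaxy = rac1
           Sid @nebula = rac2
           Owner = oracle
           Home = "/oracle/orahome"
           )

Both groups set Parallel = 1, so VCS brings the resources online on every listed system rather than failing them over between systems.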

Storage Foundation for Oracle RAC OSD Layer Support

Storage Foundation for Oracle RAC OSD layer support includes:

- Cluster Membership (VCSMM)
- Inter-Process Communication (VCSIPC)

Cluster Membership (VCSMM)

Oracle provides an API for providing membership information to RAC, known as skgxn (system kernel generic interface node membership). Oracle RAC expects to make specific skgxn calls and get the membership information it requires. In Storage Foundation for Oracle RAC, this is implemented as a library linked to Oracle when Oracle9i RAC is installed. The skgxn library in turn makes ioctl calls to a kernel module for membership information. This kernel module is known as VCSMM and obtains membership information from GAB. Oracle uses a dynamically linked library to communicate with VCSMM.

Inter-Process Communications (VCSIPC)

In order to coordinate access to a single database by multiple instances, Oracle uses extensive communications between nodes and instances. Oracle uses Inter-Process Communications (VCSIPC) for locking traffic and Cache Fusion. Storage Foundation for Oracle RAC uses LLT to support VCSIPC in a cluster and leverages its high-performance and fault-resilient capabilities.

Oracle has defined an API for Inter-Process Communication that isolates Oracle from the underlying transport mechanism. Oracle conducts communication between processes and does not have to know how the data is moved between systems. The Oracle API for VCSIPC is referred to as System Kernel Generic Interface Inter-Process Communications (skgxp). Storage Foundation for Oracle RAC provides a shared library that is dynamically loaded by Oracle at run time to implement the skgxp functionality. This module communicates with the LLT Multiplexer (LMX) via ioctl calls. The LMX module is a kernel module designed to receive communications from the skgxp module and pass them on to the appropriate process on other nodes. The LMX module multiplexes communications between multiple related processes on the various cluster nodes. LMX leverages all features of LLT, including load balancing and fault resilience.
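As a quick sanity check that the OSD-layer kernel modules described above are actually loaded on a node, standard Linux commands suffice (the output is illustrative and trimmed):

   # lsmod | egrep 'vcsmm|lmx'
   vcsmm   ...
   lmx     ...

If either module is missing, Oracle's skgxn or skgxp calls have nothing to talk to, which typically shows up as instance startup or IPC errors (see Troubleshooting VCSIPC on page 89).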

About I/O Fencing

I/O fencing is a feature within a kernel module of Storage Foundation for Oracle RAC designed to guarantee data integrity, even in the case of faulty cluster communications causing a split brain condition.

Understanding Split Brain and the Need for I/O Fencing

Split brain is an issue faced by all cluster solutions. To provide high availability, the cluster must be capable of taking corrective action when a node fails. In Storage Foundation for Oracle RAC, this is carried out by the reconfiguration of CVM, CFS, and RAC to change membership. Problems arise when the mechanism used to detect the failure of a node breaks down, because the symptoms look identical to those of a failed node. For example, if a system in a two-node cluster were to fail, it would stop sending heartbeats over the private interconnects and the remaining node would take corrective action. However, the failure of the private interconnects would present identical symptoms. In this case, both nodes would determine that their peer has departed and attempt to take corrective action. This typically results in data corruption when both nodes attempt to take control of data storage in an uncoordinated manner.

In addition to a broken set of private networks, other scenarios can cause this situation. If a system were so busy as to appear hung, it would be declared dead. This can also happen on systems where the hardware supports a break and resume function: dropping the system to PROM level with a break and subsequently resuming means the system could be declared dead, the cluster could reform, and when the system returns, it could begin writing again.

Storage Foundation for Oracle RAC uses a technology called I/O fencing to remove the risk associated with split brain. I/O fencing blocks access to storage from specific nodes. This means that even if the node is alive, it cannot cause damage.

SCSI-3 Persistent Reservations

Storage Foundation for Oracle RAC uses a relatively new enhancement to the SCSI specification known as SCSI-3 Persistent Reservations (SCSI-3 PR). SCSI-3 PR is designed to resolve the issues of using SCSI reservations in a modern clustered SAN environment. SCSI-3 PR supports multiple nodes accessing a device while at the same time blocking access to other nodes. SCSI-3 PR reservations are persistent across SCSI bus resets, and SCSI-3 PR also supports multiple paths from a host to a disk.

By contrast, SCSI-2 reservations can be used by only one host, with one path. This means that if there is a need to block access for data integrity concerns, only one host and one path remain active. The requirements of larger clusters, with multiple nodes reading and writing to storage in a controlled manner, have made SCSI-2 reservations obsolete.

SCSI-3 PR uses a concept of registration and reservation. Systems wishing to participate register a key with a SCSI-3 device. Each system registers its own key. Multiple systems registering keys form a membership. Registered systems can then establish a reservation. This is typically set to Write Exclusive Registrants Only (WERO), meaning registered systems can write and all others cannot. For a given disk, there can be only one reservation, while there may be many registrations.

With SCSI-3 PR technology, blocking write access is as simple as removing a registration from a device. Only registered members can eject the registration of another member. A member wishing to eject another member issues a preempt and abort command that ejects the other node from the membership. Nodes not in the membership cannot issue this command. Once a node is ejected, it cannot in turn eject another. This means ejecting is final and atomic.

In the Storage Foundation for Oracle RAC implementation, a node registers the same key for all paths to the device. This means that a single preempt and abort command ejects a node from all paths to the storage device.

There are several important concepts to summarize:

- Only a registered node can eject another.
- Since a node registers the same key down each path, ejecting a single key blocks all I/O paths from the node.
- Once a node is ejected, it has no key registered and it cannot eject others.

The SCSI-3 PR specification describes the method to control access to disks with the registration and reservation mechanism. The method for determining who can register with a disk and when a registered member should eject another node is implementation specific. The following paragraphs describe Storage Foundation for Oracle RAC I/O fencing concepts and implementation.
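As a sketch of how registrations can be inspected from a node (the device name is an illustrative assumption, and the option letters can vary between releases), the vxfenadm utility reads the keys registered on a SCSI-3 disk:

   # vxfenadm -g /dev/sdk

Each cluster node should appear with the same key on every path to the disk, which is what lets a single preempt-and-abort command eject a departed node from all paths at once.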

I/O Fencing Components

I/O fencing, or simply fencing, allows write access by members of the active cluster and blocks access by non-members. I/O fencing in Storage Foundation for Oracle RAC uses several components. The physical components are coordinator disks and data disks. Each has a unique purpose and uses different physical disk devices.

Data Disks

Data disks are standard disk devices used for data storage. These can be physical disks or RAID Logical Units (LUNs). These disks must support SCSI-3 PR. Data disks are incorporated in standard VxVM/CVM disk groups. In operation, CVM is responsible for fencing data disks on a disk group basis. Since VxVM enables I/O fencing, several other features are provided: disks added to a disk group are automatically fenced, as are new paths discovered to a device.

Coordinator Disks

Coordinator disks are special-purpose disks in a Storage Foundation for Oracle RAC environment. Coordinator disks are three (or an odd number greater than three) standard disks, or LUNs, set aside for use by I/O fencing during cluster reconfiguration. The coordinator disks act as a global lock device during a cluster reconfiguration. This lock mechanism determines which node gets to fence off data drives from other nodes. From a high level, a system must eject a peer from the coordinator disks before it can fence the peer from the data drives. This concept of racing for control of the coordinator disks to gain the capability to fence data disks is key to understanding the split brain prevention capability of fencing.

Coordinator disks cannot be used for any other purpose in the Storage Foundation for Oracle RAC configuration. The user must not store data on these disks, or include the disks in a disk group used for user data. The coordinator disks can be any three disks that support SCSI-3 PR. VERITAS typically recommends the smallest possible LUNs for coordinator use. Since coordinator disks do not store any data, cluster nodes need only register with them; they do not need to reserve them.

I/O Fencing Operation

I/O fencing, provided by the kernel-based fencing module (VXFEN), performs identically on node failures and communications failures. When the fencing module on a node is informed of a change in cluster membership by the GAB module, it immediately begins the fencing operation. The node immediately attempts to eject the key for the departed node(s) from the coordinator disks using the preempt and abort command. When the node has successfully ejected the departed nodes from the coordinator disks, it ejects the departed nodes from the data disks. In a split brain scenario, both sides of the split race for control of the coordinator disks. The side winning the majority of the coordinator disks wins the race and fences the loser. The loser then panics and reboots.
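A condensed sketch of how the coordinator disks are tied into the fencing driver during setup follows. The disk group and device names are assumptions; the full procedure, including testing the disks with the vxfentsthdw utility, is in Setting Up Shared Storage and I/O Fencing for SFRAC on page 49.

   # vxdg init vxfencoorddg sdx sdy sdz
   # vxdg deport vxfencoorddg
   # echo "vxfencoorddg" > /etc/vxfendg
   # /etc/init.d/vxfen start
   # vxfenadm -d

The /etc/vxfendg file (created on each node) names the coordinator disk group; when the vxfen driver starts, it resolves the disks in that group and registers the node's keys with them, and vxfenadm -d verifies the fencing state.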


2. Preparing for Installation of Storage Foundation for Oracle RAC

This chapter provides an overview of the installation and configuration procedures for VERITAS Storage Foundation for Oracle RAC, along with the hardware, software, and preconfiguration requirements for the cluster systems.

Supported installation options:

- New install of Storage Foundation for Oracle RAC 4.1 and Oracle 10g Release 1 on Red Hat or SLES9
- New install of Storage Foundation for Oracle RAC 4.1 and Oracle9i Release 2 on Red Hat Linux
- New install of Storage Foundation for Oracle RAC 4.1 and migration from Oracle9i to Oracle 10g Release 1 or Oracle 10g Release 2 on Red Hat Linux

Note: A direct upgrade from Storage Foundation for Oracle RAC 4.0 or 4.0 Maintenance Pack 1 to Storage Foundation for Oracle RAC 4.1 is not possible due to the OS kernel differences between them.

Installation Tasks

Some manual configuration tasks are required for all installation and upgrade options. Review the information provided in the flow charts to identify the correct procedures for your installation goals.

[Flow chart: Oracle Installation and Configuration Options. Install SFRAC 4.1, then choose the Oracle version: install Oracle9i (optionally applying the Oracle9i patchset), install Oracle 10g, or migrate from Oracle9i to Oracle 10g. When the Oracle installation is complete, configure for VCS and reboot; configuration is then complete.]

Oracle Installation Tasks

Tasks for Oracle 10g:

- Install Storage Foundation for Oracle RAC 4.1: see Installing and Configuring Storage Foundation for Oracle RAC on page 33.
- Install Oracle 10g: see Installing Oracle 10g Software on Red Hat on page 95 or Installing Oracle 10g Software on SLES9 on page 265.
- Apply the Oracle 10g patchset: see Applying Oracle 10g Patchsets on Red Hat on page 132 or Applying Oracle 10g Patchsets on SLES9 on page 294.
- Migrate from Oracle9i to Oracle 10g: see Migrating from Oracle9i to Oracle 10g on page 119.
- Configure VCS for Oracle 10g: see Configuring VCS Service Groups for Oracle 10g on Red Hat on page 137 or Configuring VCS Service Groups for Oracle 10g on SLES9 on page 299.

Tasks for Oracle9i:

- Install Storage Foundation for Oracle RAC 4.1: see Installing and Configuring Storage Foundation for Oracle RAC on page 33.
- Install Oracle9i: see Installing Oracle9i Software on Red Hat on page 181.
- Install the Oracle9i patchset: see Applying Oracle9i Patchsets on Red Hat on page 199.
- Configure VCS for Oracle9i: see Configuring VCS Service Groups for Oracle 9i on Red Hat on page 203.

Reviewing the Requirements

The hardware supported by Storage Foundation for Oracle RAC is detailed in the compatibility list on the VERITAS support Web site. The compatibility list contains information about supported hardware and is updated regularly. Refer also to the VERITAS Storage Foundation 4.1 for Oracle RAC Release Notes for descriptions of the supported hardware and software.

Note: Before installing or upgrading Storage Foundation for Oracle RAC, review the current compatibility list to confirm the compatibility of your hardware and software.

Supported Software

Storage Foundation for Oracle RAC operates on the following Linux operating systems and kernels:

- Red Hat Enterprise Linux 4 Update 1: EL, ELsmp, and ELhugemem kernels
- SUSE Linux Enterprise Server 9 Service Pack 1: default, smp, and bigsmp kernels
- SUSE Linux Enterprise Server 9 Service Pack 1 on x86_64: default and smp kernels

For RHEL4, be sure you disable SELinux during OS installation. Also be sure you disable the RHEL4 firewall during OS installation.

For SLES9, be sure you do not use the auditing subsystem. ODM is not compatible with the auditing subsystem on SLES9.
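A quick way to confirm a node matches these requirements before running the installer (both commands are standard RHEL4/SLES9 tools; the expected results below are assumptions based on the requirements above):

   # uname -r
   # getenforce

The kernel reported by uname -r should be one of the supported variants listed above, and on RHEL4 getenforce should report Disabled. On SLES9, also confirm that the Linux audit subsystem is not in use, since ODM is not compatible with it.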

Install Linux operating systems. Storage Foundation for Oracle RAC on Linux supports specific combinations of Oracle version (Oracle9i Release 2, Oracle 10g Release 1, Oracle 10g Release 2), distribution (RHEL 4.0, SLES9 SP1), and architecture (x86_32, x86_64, IA64). [Table: Oracle version support matrix listing Yes, No, or N/A for each combination; consult the VERITAS Storage Foundation 4.1 for Oracle RAC Release Notes for the supported combinations.]

Install Linux patches. No specific patches are required for Storage Foundation for Oracle RAC. However, Oracle9i Release 2 software on Red Hat Enterprise Linux 4.0 requires a number of patches. These patches, and other manual steps that need to be executed before installing Oracle, are available from Oracle.

Use only the versions of VCS, VxVM, and VxFS provided on the software CD. Other versions must be removed before the software from the VERITAS Storage Foundation for Oracle RAC CD is installed.

System Requirements

Prerequisites for installing VERITAS Storage Foundation for Oracle RAC software:

- VERITAS Storage Foundation for Oracle RAC supports RAC clusters of up to eight nodes. We recommend that each system have two or more CPUs at 2 GHz or higher.
- Verify local disk space for each system in the cluster: Storage Foundation for Oracle RAC requires 500 MB. Oracle9i on the local disk requires approximately 3.0 GB. Oracle 10g on the local disk requires approximately 3.0 GB. Storage Foundation for Oracle RAC and Oracle9i require a minimum of 512 MB in /tmp for the duration of the installation only.
- 1 GB or more of physical memory is recommended.

- Shared storage: The storage units used by Storage Foundation for Oracle RAC 4.1 must support and be enabled for SCSI-3 Persistent Reservations (PR), a requirement for I/O fencing.
- Required information for Oracle 10g installation: one public base IP address (in DNS) per node, one public virtual IP address (in DNS) per node, and one private virtual IP address (non-routable) per node.

Preinstallation Requirements

- Obtain license keys. Refer to Obtaining License Keys on page 27.
- Obtain Oracle9i Release 2 or Oracle 10g Release 1.
- Synchronize cluster nodes. Refer to Synchronizing Cluster Nodes on page 28.
- Set up inter-system communication. Refer to Setting up Inter-system Communication on page 28.

Reviewing the Configuration

From a high level, when the installations of Storage Foundation for Oracle RAC and Oracle are completed and a starter database is created, the RAC cluster should have the following characteristics:

- The cluster systems are connected by at least two VCS private network links using 100Base-T or Gigabit Ethernet controllers on each system. For two-node clusters, cross-over Ethernet cables are acceptable. For three or more nodes, Ethernet hubs or switches are required. For maximum performance, VERITAS recommends the use of switches over hubs. In either case, a minimum of two switches or hubs should be used to provide the necessary redundancy. If multiple links are present on a single switch, such as cases where three or four links are configured, a separate VLAN must be constructed for each link. The use of multiple links on a single hub is not supported.
- Each system is connected to shared storage devices using Fibre Channel switches. The use of shared SCSI is not supported with this product. For a complete list of supported Fibre Channel storage devices, see the current hardware compatibility list on the VERITAS Support Web site.
- Each system runs VERITAS Cluster Server software including I/O fencing, VERITAS Volume Manager with cluster features (CVM), VERITAS File System with cluster features (CFS), and Storage Foundation for Oracle RAC agents and components.

- The Oracle RAC software is installed on either the shared storage accessible to all systems or locally on each system.
- The Oracle RAC database is configured on the shared storage (cluster file system) available to each of the systems.
- VCS is configured such that the agents direct and manage the resources required by Oracle RAC, which runs in parallel on each system. (Recommended but not required.)

[Figure: A Cluster Running Oracle RAC - clients reach the cluster over the public network (LAN); the private network uses an independent hub/switch per interconnect link; Fibre Channel switches connect the systems to the disk arrays, which include the coordinator disks]

Obtaining License Keys

VERITAS Storage Foundation for Oracle RAC is a licensed software product. The installsfrac program prompts you for a license key for each system. You cannot use your VERITAS software product until you have completed the licensing process. Use either method described in the following two sections to obtain a valid license key.

Using the VERITAS vlicense Web Site to Obtain a License Key

You can obtain your license key most efficiently using the VERITAS vlicense web site. The License Key Request Form has all the information needed to establish a user account on vlicense and generate your license key. The License Key Request Form is a one-page insert included with the CD in your product package. You must have this form to obtain a software license key for your VERITAS product.

Note Do not discard the License Key Request Form. If you have lost or do not have the form for any reason, email license@veritas.com.

The License Key Request Form contains information unique to your VERITAS software purchase. To obtain your software license key, you need the following information shown on the form:

- VERITAS customer number
- Order number
- Serial number

Follow the appropriate instructions on the vlicense web site to obtain your license key depending on whether you are a new or previous user of vlicense:

1. Access the web site.

2. Log in, or create a new user to log in, as necessary.

3. Follow the instructions on the pages as they are displayed.

When you receive the generated license key, you can proceed with installation.

Faxing the License Key Request Form to Obtain a License Key

If you do not have Internet access, you can fax the License Key Request Form to VERITAS. Be advised that faxing the form generally requires several business days to process in order to provide a license key. Before faxing, sign and date the form in the appropriate spaces. Fax it to the number shown on the form.

Preinstallation Configuration

Check the following before installing Storage Foundation for Oracle RAC.

Synchronizing Cluster Nodes

We recommend that all cluster nodes have the same time. If you do not run the Network Time Protocol (NTP) daemon, make sure the time settings on each cluster system are synchronized. On RHEL4, use the rdate command on each system to synchronize with the NTP server:

  # rdate -s timesrv

On SLES9, use the yast command on each system to synchronize with the NTP server:

  # yast ntp-client

Setting up Inter-system Communication

If your systems have secure shell (ssh) configured, the installation program can successfully install Storage Foundation for Oracle RAC using ssh, as long as ssh commands between systems in the cluster can execute without password prompting and confirmation. To use ssh with the VERITAS product installer, see step 3 on page 36. To use ssh with the Storage Foundation for Oracle RAC installation script, see step 3 on page 37.

Otherwise, remote shell (rsh) access must be enabled between all nodes in the cluster for the duration of the installation and disk verification. Refer to the rsh manual page for more assistance configuring remote shell. Remove the remote rsh access permissions when they are no longer required for installation and disk verification. For example, if you added a + character in the first line of the .rhosts file, remove that line. For the complete procedure, see the VERITAS Cluster Server 4.1 Installation Guide for Linux.

Setting up Shared Storage

After installing the OS, preparation is required before installing Storage Foundation for Oracle RAC. The shared storage must be visible to the SCSI layer from all the nodes in the cluster. You can verify this by looking at the disk names in the /proc/scsi/scsi file on each node. For troubleshooting, see Shared Disks Not Visible.
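The comparison of /proc/scsi/scsi across nodes can be scripted. The following is a minimal sketch, assuming the cluster nodes used as examples in this guide (galaxy and nebula) and rsh access; substitute ssh if that is how the cluster is set up. Devices may be listed in a different order on each node, so a difference reported here is a prompt to investigate rather than proof of a problem.

  #!/bin/sh
  # Compare the SCSI disk inventory on each remote node with the
  # inventory on the local node.
  NODES="nebula"                      # remote nodes; run from galaxy
  grep Vendor /proc/scsi/scsi > /tmp/scsi.local
  for node in $NODES; do
      rsh $node "grep Vendor /proc/scsi/scsi" > /tmp/scsi.$node
      if diff /tmp/scsi.local /tmp/scsi.$node > /dev/null; then
          echo "$node: disk inventory matches the local node"
      else
          echo "$node: disk inventory differs -- check cabling and zoning"
      fi
  done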

Configuring the I/O Scheduler

VERITAS recommends using the Linux 'deadline' I/O scheduler for database workloads. Configure your system to boot with the 'elevator=deadline' argument to select the 'deadline' scheduler.

To determine whether a system is using the deadline scheduler, look for "elevator=deadline" in /proc/cmdline.

To configure a system to use the deadline scheduler

1. Include the elevator=deadline parameter in the boot arguments of the GRUB or ELILO configuration file. The location of the appropriate configuration file depends on the system's architecture and Linux distribution:

     Architecture and Distribution          Configuration File
     x86 (RHEL4 only) and x86_64 (SLES9)    /boot/grub/menu.lst
     IA64 (SLES9)                           /boot/efi/efi/SuSE/elilo.conf

   For ELILO (IA64), add the elevator=deadline parameter to the append list. For example, change:

     image=vmlinuz-<kernel-version>
     label=linux
     initrd=initrd-<kernel-version>.img
     read-only
     append="rhgb quiet root=LABEL=/"

   To:

     image=vmlinuz-<kernel-version>
     label=linux
     initrd=initrd-<kernel-version>.img
     read-only
     append="rhgb quiet root=LABEL=/ elevator=deadline"

   For GRUB configuration files (x86 and x86_64), add the elevator=deadline parameter to the kernel command. For example, change:

     title RHEL AS 4 smp
         root (hd1,1)
         kernel /boot/vmlinuz-<kernel-version>smp ro root=/dev/sdb2
         initrd /boot/initrd-<kernel-version>smp.img

   To:

     title RHEL AS 4 smp
         root (hd1,1)
         kernel /boot/vmlinuz-<kernel-version>smp ro root=/dev/sdb2 elevator=deadline
         initrd /boot/initrd-<kernel-version>smp.img

   A setting for the elevator parameter is always included by SUSE in its ELILO and GRUB configuration files. In this case, change the parameter from elevator=cfq to elevator=deadline.

2. Reboot the system once the appropriate file has been modified.

See the operating system documentation for more information on I/O schedulers.
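After the reboot, a quick loop can confirm that every node actually selected the deadline scheduler. This is a sketch only: the node names are the examples used in this guide, and the output of rsh is captured rather than its exit status, since rsh does not reliably return the remote command's status.

  #!/bin/sh
  # Verify that each node booted with elevator=deadline.
  for node in galaxy nebula; do
      out=`rsh $node cat /proc/cmdline`
      case "$out" in
          *elevator=deadline*) echo "$node: deadline scheduler selected" ;;
          *) echo "$node: deadline NOT selected -- fix the boot arguments" ;;
      esac
  done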

SLES9 Network Configuration

Before installing Storage Foundation for Oracle RAC on SLES9, some network configuration is required.

To configure a SLES9 network for Storage Foundation for Oracle RAC

1. If it is not already set to yes, set HOTPLUG_PCI_QUEUE_NIC_EVENTS in /etc/sysconfig/hotplug to yes:

     HOTPLUG_PCI_QUEUE_NIC_EVENTS=yes

2. Make sure that all Ethernet devices are brought online at boot time by including them as MANDATORY_DEVICES. This ensures that LLT interfaces are available before configuration. For example:

     # grep MANDATORY_DEVICES /etc/sysconfig/network/config
     MANDATORY_DEVICES="eth-id-00:04:23:a5:99:bc eth-id-00:04:23:a5:99:bd
     eth-id-00:30:6e:4a:62:50 eth-id-00:30:6e:4a:63:5d"

   Each entry in the MANDATORY_DEVICES list is of the form eth-id-<macaddress>. Make the appropriate entries in this list using the MAC addresses of the interfaces present.

3. To ensure that the interface name to MAC address mapping remains the same across reboots, VERITAS requires that PERSISTENT_NAME entries be added to the configuration files for all the network interfaces, including those that are not currently used:

   a. Run: ifconfig -a

   b. For each Ethernet interface displayed:

      If it does not already exist, create a file named /etc/sysconfig/network/ifcfg-eth-id-<mac>, where <mac> is the hardware address of that interface. Make sure that <mac> contains lowercase characters. If a file named ifcfg-eth0 exists, delete it.

      Add the following line at the end of the new file:

        PERSISTENT_NAME=<ethX>

      where <ethX> is the interface name for this interface.

      Example:

        # ifconfig -a
        eth0  Link encap:Ethernet  HWaddr 00:02:B3:DB:38:FE
              inet addr:...  Bcast:...  Mask:...
              inet6 addr: fe80::202:b3ff:fedb:38fe/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              ...
              Base address:0xdce0 Memory:fcf20000-fcf40000

      If it does not already exist, create a file named:

        /etc/sysconfig/network/ifcfg-eth-id-00:02:b3:db:38:fe

      Add the following line at the end of this file:

        PERSISTENT_NAME=eth0

   c. Repeat this procedure for all interfaces displayed by ifconfig -a.
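Step 3 lends itself to a small script. The sketch below is one way to generate the files automatically; it assumes a 2.6 kernel, where each interface's MAC address can be read from /sys/class/net/<ethX>/address (already in lowercase), and it should be reviewed before use.

  #!/bin/sh
  # Create ifcfg-eth-id-<mac> files containing PERSISTENT_NAME entries
  # so that interface-name-to-MAC mappings survive reboots.
  for dev in /sys/class/net/eth*; do
      eth=`basename $dev`
      mac=`cat $dev/address`
      cfg=/etc/sysconfig/network/ifcfg-eth-id-$mac
      # Remove the plain ifcfg-ethX file if one exists.
      [ -f /etc/sysconfig/network/ifcfg-$eth ] && \
          rm /etc/sysconfig/network/ifcfg-$eth
      touch $cfg
      grep -q "^PERSISTENT_NAME=" $cfg || echo "PERSISTENT_NAME=$eth" >> $cfg
  done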

Information Required for the installsfrac Script

The installsfrac script is interactive. It prompts you for information about the cluster where you are installing Storage Foundation for Oracle RAC. For a smoother installation process, collect the following information before you start the script.

General cluster configuration information required by the installsfrac script:

- The name of the cluster (must begin with a letter (a-z, A-Z))
- The ID number of the cluster (each cluster within a site requires a unique ID)
- The host names of the systems in the cluster
- The device names for the private network links
- A valid license key for VERITAS Storage Foundation for Oracle RAC

Web-based cluster manager information requested by the installsfrac script:

- The name of the public NIC for the VCS Cluster Manager
- The virtual IP address for the NIC used by Cluster Manager
- The netmask for the virtual IP address

SMTP/SNMP (optional) information requested by the installsfrac script:

- Domain-based address of the SMTP server
- Email addresses of SMTP notification recipients
- Severity level of events for SMTP notification
- System name for the SNMP console
- Severity level of events for SNMP notification

Note The installation of VCS using the installsfrac installation utility is essentially the same as that described in the VERITAS Cluster Server 4.1 Installation Guide. The VCS Installation Guide contains more extensive details about VCS installation.

Installing and Configuring Storage Foundation for Oracle RAC

This chapter describes the procedures for installing Storage Foundation for Oracle RAC. Be sure the systems meet the requirements for installation described in Reviewing the Requirements on page 22.

High-level objectives and required tasks to complete each objective:

Objective: Setting the Environment Variables on page 34
Tasks: Setting the PATH variable; setting the MANPATH variable; mounting the CD.

Objective: Starting Installation on page 36
Tasks: Running the interactive installation script; installing VCS; installing Volume Manager enabled for clusters (CVM); installing Cluster File System (CFS); installing Storage Foundation for Oracle RAC modules; installing the VCS enterprise agent for Oracle.

Objective: Configuring the Cluster and Optional Features on page 39
Tasks: Configuring VCS cluster settings; configuring Cluster Manager; configuring notification options.

Objective: Installing the rpms on page 43
Tasks: Configuring Storage Foundation either in step 1 or separately with installsfrac -configure.

Objective: Starting VxVM and Specifying VxVM Information on page 44
Tasks: Starting cluster components.

Objective: Verifying GAB Port Membership on page 47
Tasks: Verifying GAB port membership.

Objective: Setting Up Shared Storage and I/O Fencing for SFRAC on page 49
Tasks: Setting up the coordinator disks used by the I/O fencing feature into a disk group; adding disks and testing disks; setting up coordinator disks; editing the VCS configuration; removing the /etc/vxfenmode file; starting I/O fencing.

Objective: Configuring CVM and CFS Manually (Optional) on page 62
Tasks: Configuring CVM and CFS if not configured while running the installsfrac script.

For adding a node to a cluster already running Storage Foundation for Oracle RAC, see the following for instructions on how to install the software on the new node:

- Adding a Node to an Oracle 10g Cluster on page 149 (Red Hat)
- Adding a Node to an Oracle9i Red Hat Cluster on page 231
- Adding a Node to an Oracle 10g Cluster on page 311 (SUSE)

Installing Storage Foundation for Oracle RAC 4.1

The installsfrac script installs VERITAS Cluster Server (VCS), VERITAS Volume Manager (VxVM), and VERITAS File System (VxFS) on each system. Before running the script, set up rsh or ssh access for the root user between the system on which the installer is executed and the intended cluster nodes. The installer offers flexibility: the installing node may or may not be one of the cluster nodes.

Setting the Environment Variables

The installation and other commands are located in various directories. On each system, add these directories to your PATH environment variable using the following commands:

To set the PATH variable

For the Bourne shell (bash, sh, or ksh):

  $ export PATH=$PATH:/usr/lib/vxvm/bin:/usr/lib/fs/vxfs:\
  /opt/VRTSvxfs/cfs/bin:/opt/VRTSvcs/bin:\
  /opt/VRTSvcs/ops/bin:/opt/VRTSob/bin:/etc/vx/bin

For the C shell (csh or tcsh):

  % setenv PATH ${PATH}:/usr/lib/vxvm/bin:/usr/lib/fs/vxfs:\
  /opt/VRTSvxfs/cfs/bin:/opt/VRTSvcs/bin:/opt/VRTSvcs/ops/bin:\
  /opt/VRTSob/bin:/etc/vx/bin

Note For the root user, do not define paths to a cluster file system in the LD_LIBRARY_PATH variable. For example, define $ORACLE_HOME/lib in LD_LIBRARY_PATH for the user oracle only.

To set the MANPATH variable

Set the MANPATH variable to enable viewing manual pages.

For the Bourne shell (bash, sh, or ksh):

  $ export MANPATH=$MANPATH:/opt/VRTS/man

For the C shell (csh or tcsh):

  % setenv MANPATH ${MANPATH}:/opt/VRTS/man

Note Some terminal programs may display garbage characters when viewing man pages. Setting the following environment variable resolves this issue: LC_ALL=C

To mount the VERITAS product CD

1. Insert the CD containing the VERITAS Storage Foundation for Oracle RAC 4.1 software in a CD-ROM drive connected to the system.

2. Enter:

  # mount /mnt/cdrom

Starting Installation

You can install VERITAS Storage Foundation for Oracle RAC on clusters of two to eight systems using the VERITAS product installer or the installsfrac utility. The procedures that follow show an example installation on two systems, galaxy and nebula, using installsfrac, the script that installs VERITAS Storage Foundation for Oracle RAC.

When you start the installation script and enter the systems for installation, the script:

- makes preliminary system checks
- adds infrastructure packages
- checks for required licenses
- lists the packages (rpms) to install when licensing is confirmed
- makes final system checks prior to installation

To run the VERITAS product installer

1. Log in as root on one of the systems for installation.

2. Change to the directory containing the installation program:

  # cd /mnt/cdrom

3. To install Storage Foundation for Oracle RAC using ssh (default):

  # ./installer

To install Storage Foundation for Oracle RAC using rsh:

  # ./installer -usersh

4. The product installer begins by displaying copyright information.

5. From the opening selection menu, enter I to choose Install/Upgrade a Product.

6. From the displayed list of products to install, choose VERITAS Storage Foundation for Oracle RAC.

7. When the installation program begins, it starts the product installation script by presenting a copyright message. Skip to step 4 of the next section, To use the installsfrac script on page 37, to continue the installation.

To use the installsfrac script

1. Log in as root on one of the systems for installation.

2. Change to the directory containing the installation program:

For RHEL4.0 on x86:

  # cd /mnt/cdrom/rhel4_x86/storage_foundation_for_oracle_rac

For SLES9 on IA64:

  # cd /mnt/cdrom/sles9_ia64/storage_foundation_for_oracle_rac

For SLES9 on x86_64:

  # cd /mnt/cdrom/sles9_x86_64/storage_foundation_for_oracle_rac

3. To install Storage Foundation for Oracle RAC:

Using ssh (default): set up ssh keys for automatic authorization between the cluster nodes, then enter:

  # ./installsfrac

Using rsh:

  # ./installsfrac -usersh

The program begins by presenting a copyright message. It then prompts you to enter the names of the systems on which to install the software.

4. Enter the system names. For example:

  SFRAC: galaxy nebula

5. The installer checks that the systems are ready for installation. The installer verifies that:

a. The operating system on each system is correct.

b. The local system, where the script is running, can communicate with the remote systems.

6. At the conclusion of these checks, the installer:

- Creates a log called installsfrac<date_time> on each system in the directory /var/tmp. See Completing Installation on page 47 for information about this log and other files created during installation.
- Specifies the utilities it uses to communicate with the remote systems; typically this is rsh -x and rcp.

Press Enter to continue.

7. Before installing any rpms, the installer checks whether the VERITAS infrastructure rpms, VRTScpi and VRTSvlic, are present on each installation system.

- If the VRTSvlic rpm is not present, the installer installs it on each system after checking for sufficient disk space. If the installed VRTSvlic rpm is not the current version, the installer prompts you to upgrade to the current version. The installer exits if you decline to upgrade.
- If the VRTScpi rpm is not present, the installer installs it on each of the systems.

Press Enter to continue.

8. VERITAS Storage Foundation for Oracle RAC requires a license key, which must be installed before installation.

a. Install the license, if it is not already installed.

b. Add licenses for other products as needed.

9. The installer lists the rpms to be installed on each system. After presenting the rpms to be installed, the installer checks each system. If the requirements for installation are not met, the installer stops and indicates the actions required before proceeding. It:

- determines whether any of the rpms to be installed are already installed
- checks for the required file system space
- checks for the presence of any processes that could conflict with the installation

When the check completes successfully, press Enter to continue.

10. The installer can install all of the Storage Foundation for Oracle RAC rpms without performing any configuration. You could choose not to configure Storage Foundation for Oracle RAC now, for example, if you are installing Storage Foundation for Oracle RAC on a node you are adding to an existing configured Storage Foundation for Oracle RAC cluster. Enter y to configure Storage Foundation for Oracle RAC now, or n to configure later.

11. As you proceed, the installer prompts you with questions and explains how you can respond to them. Press Enter to continue.

Configuring the Cluster and Optional Features

The next phase of installation is interactive. The installsfrac script prompts you for information to set up and configure the cluster. You can set up optional features of the cluster including Cluster Manager, SMTP notification, and SNMP trap notification. You are now prompted for VCS-related configuration information. Note that the cluster name and the cluster ID must be unique (see Information Required for the installsfrac Script on page 32).

To configure VCS

1. Enter a unique cluster name.

2. Enter a unique cluster ID number between 0 and 65535.

3. Configure the NICs.

Note The eth0 adapter is typically the network interface card used for the public network and should not be used for a private network.

Example:

  Discovering NICs on galaxy... discovered eth0 eth1 eth2 eth3
  Enter the NIC for the first private heartbeat NIC on galaxy:
  [b,?] eth2
  Would you like to configure a second private heartbeat link?
  [y,n,q,b,?] (y)
  Enter the NIC for the second private heartbeat NIC on galaxy:
  [b,?] eth3
  Would you like to configure a third private heartbeat link?
  [y,n,q,b,?] (n)

In this example, eth2 and eth3 are used for the private heartbeat NICs on all systems. You may use NICs with different device names on some of the cluster systems. If necessary, you may indicate they are different when the installer prompts you:

  Are you using the same NICs for private heartbeat links on all
  systems? [y,n,q,b,?] (y)

4. Confirm the information if it is correct. Example:

  Cluster Name: rac_cluster1
  Cluster ID Number: 7
  Private Heartbeat NICs for galaxy: link1=eth2 link2=eth3
  Private Heartbeat NICs for nebula: link1=eth2 link2=eth3
  Is this information correct? [y,n,q] (y)

5. On systems operating in an English locale, you can add VCS users now. For each user, enter:

- name
- password
- level of privileges

You may also reset the password for the Admin user. Confirm the user information when prompted.

6. The configuration of Cluster Manager (Web Console) requires:

- the name of a NIC device
- a virtual IP address
- a netmask for use by each system in the cluster

Enter y to configure Cluster Manager (Web Console) on the systems, or n to skip configuring Cluster Manager.

7. Set the public NIC on the first system.

a. Press Enter to confirm using the discovered NIC, or type the name of a NIC to use and press Enter.

b. Press Enter if all systems use the same public NIC.

8. Enter the virtual IP address to be used by Cluster Manager.

9. You can confirm the default netmask, or enter another.

10. The installer prompts you to verify the Cluster Manager information. Example:

  Cluster Manager (Web Console) verification:
  NIC: eth0
  IP: ...
  Netmask: ...
  Is this information correct? [y,n,q] (y)

If the information is not correct, enter n. The installer prompts you to enter the information again. If the information is correct, press Enter.

11. The installer lists the information required to configure SMTP notification:

- the domain-based host name of the SMTP server
- the address of each SMTP recipient
- a minimum severity level of messages to send to each recipient

Enter y to configure SMTP, or n to skip configuring SMTP notification. If you select n, the installer advances you to the screen for configuring SNMP notification (see step 14).

12. Respond to the prompts and provide information to configure SMTP notification. Example:

  Enter the domain based hostname of the SMTP server: [b,?]
  smtp.example.com
  Enter the full email address of the SMTP recipient: [b,?]
  admin@example.com
  Enter the minimum severity of events for which mail should be
  sent to admin@example.com [I=Information, W=Warning, E=Error,
  S=SevereError]: [b,?] w
  Would you like to add another SMTP recipient? [y,n,q,b] (n) n

13. Verify the SMTP notification information:

- If the information is not correct, enter n and enter the information again.
- If the information is correct, press Enter.

14. The installer lists the information required to configure SNMP trap notification:

- system names of SNMP consoles to receive VCS trap messages
- SNMP trap daemon port numbers for each console
- a minimum severity level of messages to send to each console

To configure SNMP notification, enter y; to skip, enter n.

15. Respond to the prompts and provide information to configure SNMP trap notification. Example:

  Enter the SNMP trap daemon port: [b,?] (162)
  Enter the SNMP console system name: [b,?] saturn
  Enter the minimum severity of events for which SNMP traps
  should be sent to saturn [I=Information, W=Warning, E=Error,
  S=SevereError]: [b,?] i
  Would you like to add another SNMP console? [y,n,q,b] (n) n

16. Verify the SNMP trap notification information:

- If the information is not correct, enter n and enter the information again.
- If the information is correct, press Enter.

17. Enter the Cluster Volume Manager cluster reconfiguration timeout in seconds, or accept the default. The Cluster Volume Manager (CVM) component is configured with a default reconfiguration timeout value of 200 seconds, which is appropriate for most clusters.

Installing the rpms

To install the rpms

1. With the configuration information, the installer can install the rpms on the cluster systems. You can choose to have the rpms installed consecutively on each cluster system, or simultaneously. Installing rpms on systems consecutively takes more time but allows for better error handling.

2. After you choose the sequence of installing the rpms, the installation process begins with the first steps, indicating the total number of steps required, based on the number of systems and the chosen configuration options. Notice that the installer copies rpms to remote systems before installing them. Example:

  Installing Storage Foundation for Oracle RAC 4.1 on galaxy:
  Copying VRTSvxvm rpm to galaxy......... Done 1 of 132 steps
  Installing VRTSvxvm 4.1 on galaxy...... Done 2 of 132 steps
  Copying VRTSvmman rpm to galaxy........ Done 3 of 132 steps
  Installing VRTSvmman 4.1 on galaxy..... Done 4 of 132 steps
  ..
  Copying VRTSdbac rpm to nebula......... Done 129 of 132 steps
  Installing VRTSdbac 4.1 on nebula...... Done 130 of 132 steps
  Copying VRTScsocw rpm to nebula........ Done 131 of 132 steps
  Installing VRTScsocw 4.1 on nebula..... Done 132 of 132 steps
  Storage Foundation for Oracle RAC installation completed
  successfully.
  Press [Enter] to continue:
  Verifying that all NICs have PERSISTENT_NAME set correctly on
  galaxy:

For VCS to run correctly, the names of the NIC cards must be boot persistent.

Starting VxVM and Specifying VxVM Information

Linux systems use device names such as /dev/sdk to identify disks on the system. It is also possible to use the enclosure-based naming scheme, which makes disk arrays more readily recognizable; you can choose to use this naming scheme for Volume Manager. Dynamic Multipathing (DMP) is a prerequisite for enclosure-based naming.

To start VxVM and specify VxVM information

1. The install script evaluates the systems for enclosure-based naming. When prompted, enter y to set up the enclosure-based naming scheme.

2. Having installed all rpms and identified all configuration information, the installer creates the Storage Foundation for Oracle RAC configuration files and copies them to each cluster node. Press Enter to continue. Example:

  Configuring Storage Foundation for Oracle RAC:
  Creating Cluster Server configuration files... Done
  Copying configuration files to galaxy......... Done
  Copying configuration files to nebula......... Done
  Storage Foundation for Oracle RAC configured successfully.
  Press [Enter] to continue:

3. You can now start the Storage Foundation for Oracle RAC processes.

Example:

  Starting SFRAC:
  Starting LLT on galaxy............. Started
  Starting LLT on nebula............. Started
  Starting GAB on galaxy............. Started
  Starting GAB on nebula............. Started
  Starting Cluster Server on galaxy.. Started
  Checking SFRAC drivers on galaxy... Checked
  Starting SFRAC drivers on galaxy... Started
  Starting Cluster Server on nebula.. Started
  Checking SFRAC drivers on nebula... Checked
  Starting SFRAC drivers on nebula... Started
  Confirming SFRAC startup........... 2 systems RUNNING
  Evaluating which systems can now be started...
  System galaxy is eligible -- can be started.
  System nebula is eligible -- can be started.
  Preparing to start VxVM on target system(s)...
  Begin starting VxVM on system galaxy
  Disabling enclosure-based naming... Done
  Starting vxconfigd for VxVM........ Succeeded
  Done starting VxVM on system galaxy
  Begin starting VxVM on system nebula
  Disabling enclosure-based naming... Done
  Starting vxconfigd for VxVM........ Succeeded
  Done starting VxVM on system nebula
  Done with starting VxVM on target system(s)...
  Press [Enter] to continue:

4. I/O fencing is started in disabled mode on all nodes of the cluster. I/O fencing must be configured and enabled after the installation of Storage Foundation for Oracle RAC has successfully completed. VERITAS does not support using I/O fencing in disabled mode in a Storage Foundation for Oracle RAC environment. See Setting Up Shared Storage and I/O Fencing for SFRAC on page 49 for details on configuring I/O fencing in a cluster environment. Press Enter to continue.

5. Successful startup of the VxVM configuration daemons is confirmed, and then the CFS agents for SFRAC are started. Full activation of Cluster Manager, Cluster Volume Manager, and Cluster File System requires a reboot on all systems.

6. Some VERITAS Volume Manager (VxVM) commands require that a disk group be specified, so the installer enables you to register the name of a default VxVM disk group (which can be created later) on each eligible cluster system. Setting up a default disk group is optional.

a. When prompted, press Enter to set up a default disk group, or enter n if you choose not to set one up.

b. To set up a default disk group on the cluster systems, specify a disk group of the same name on each cluster system, or provide a name for each node when prompted.

c. The installer summarizes the disk group information and sets the disk groups when you confirm at the prompt.

Example:

  Evaluating which systems can now have their default disk group
  configured...
  System galaxy is eligible -- can set the default diskgroup.
  System nebula is eligible -- can set the default diskgroup.
  Do you want to set up the default disk group for each system?
  [y,n,q,?] (y)
  Will you specify one disk group name for all eligible systems?
  [y,n,q,?] (y)
  Please specify a default disk group for all systems or type l
  to display a listing of existing disk group(s). [?] oracledg
  Host: galaxy...disk group: oracledg
  Host: nebula...disk group: oracledg
  Is this correct? [y,n,q] (y)
  Setting default diskgroup to oracledg on galaxy... Done
  Setting default diskgroup to oracledg on nebula... Done

7. The installer starts the VxVM daemons and indicates the success of each process. Press Enter to continue.

Completing Installation

The completion of the Storage Foundation for Oracle RAC installation is indicated by the following output:

  Installation of Storage Foundation for Oracle RAC 4.1 has
  completed successfully.
  The installation summary is saved at:
  /opt/VRTS/install/logs/installsfracxxxxxxxxx.summary
  The installsfrac log is saved at:
  /opt/VRTS/install/logs/installsfracxxxxxxxxx.log
  The installation response file is saved at:
  /opt/VRTS/install/logs/installsfracxxxxxxxxx.response

Note The summary, the log, and the response files contain the date as part of their names.

Verifying GAB Port Membership

After installing the VERITAS Storage Foundation 4.1 for Oracle RAC packages, verify the installation by running the command:

  # /sbin/gabconfig -a

Example:

  galaxy# /sbin/gabconfig -a
  GAB Port Memberships
  ==============================================================
  Port a gen 4a1c0001 membership 01
  Port b gen 4a1c0002 membership 01
  Port d gen 4a1c0003 membership 01
  Port f gen 4a1c0004 membership 01
  Port h gen 4a1c0005 membership 01
  Port o gen 4a1c0006 membership 01
  Port v gen 1fc60002 membership 01
  Port w gen 15ba0002 membership 01

The output of the gabconfig -a command displays which cluster systems have membership with the modules that have been installed and configured thus far in the installation. The first line indicates that each system (0 and 1) has membership with the GAB utility, which uses port a. The ports listed, including port a, are configured for the following functions:

GAB Port Membership

  Port   Function
  a      GAB
  b      I/O fencing
  d      ODM (Oracle Disk Manager)
  f      CFS (Cluster File System)
  h      VCS (VERITAS Cluster Server: high availability daemon)
  o      VCSMM driver
  v      CVM (Cluster Volume Manager)
  w      vxconfigd (module for CVM)
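If you script post-installation checks, a loop over the expected ports can flag a missing membership immediately. The following is a sketch, assuming the port list in the table above; run it on any cluster node:

  #!/bin/sh
  # Verify that every GAB port expected by SFRAC has a membership
  # entry in the gabconfig output.
  for port in a b d f h o v w; do
      if /sbin/gabconfig -a | grep -q "Port $port "; then
          echo "port $port: membership present"
      else
          echo "port $port: missing -- the associated component is not running"
      fi
  done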

Setting Up Shared Storage and I/O Fencing for SFRAC

Shared storage added for use with VERITAS Storage Foundation for Oracle RAC software must support SCSI-3 persistent reservations, a functionality that enables the use of I/O fencing to prevent data corruption caused by a split brain. See About I/O Fencing on page 15. For Storage Foundation for Oracle RAC, there are two types of shared storage: data disks, where the shared data is stored, and coordinator disks, which are small LUNs (typically three per cluster) used to control access to data disks by the cluster nodes.

High-level objectives and required tasks to complete each objective:

Objective: Adding Disks on page 50
Tasks: Adding disks.

Objective: Testing Disks for Data Storage Using vxfentsthdw on page 52
Tasks: Testing data disks.

Objective: Setting up Coordinator Disks on page 54
Tasks: Setting up coordinator disks.

Objective: Editing VCS Configuration to Add the UseFence Attribute on page 59
Tasks: Editing the VCS configuration file for I/O fencing.

Objective: Starting I/O Fencing in Enabled Mode on page 60
Tasks: Stopping and starting each system to bring up the cluster configuration with I/O fencing enabled.

Adding Disks

After you physically add shared disks to the cluster systems, you must initialize them as VxVM disks. Use the following examples, or see the VERITAS Volume Manager Administrator's Guide for more information about adding and configuring disks.

Note If you are using the enclosure-based naming method, the names are different.

To add a disk

1. Determine the disks to add by checking the file:

  # cat /proc/scsi/scsi

2. Use the vxdisk scandisks command to:

- scan all disk drives and their attributes
- update the VxVM device list to reconfigure DMP with the new devices

Example:

  # vxdisk scandisks

3. To initialize the disks as VxVM disks, use either of two methods:

- Use the interactive vxdiskadm utility. VERITAS recommends specifying that the disk support Cross-platform Data Sharing (CDS) format. Refer to the disk by its VxVM name, such as sdz.
- Use the vxdisksetup command. The example that follows specifies the CDS format:

    vxdisksetup -i VxVM_device_name format=cdsdisk

  For example:

    # vxdisksetup -i sdz format=cdsdisk

Verifying that Systems See the Same Disk

To perform the test that determines whether a given disk (or LUN) supports SCSI-3 persistent reservations, two systems must simultaneously have access to the same disks. Because a given shared disk is likely to have a different name on each system, a way is needed to make sure of the identity of the disk. The method to check the identity of a given disk, or LUN, is to check its serial number. You can use the vxfenadm command with the -i option to verify that the same serial number for a LUN is returned on all paths to the LUN.

For example, an EMC array is accessible by path /dev/sdz on node A and by path /dev/sdaa on node B. From node A, the command is given:

  # vxfenadm -i /dev/sdz
  Vendor id     : EMC
  Product id    : SYMMETRIX
  Revision      : 5567
  Serial Number : ...

The same serial number information should be returned when the equivalent command is given on node B using the path /dev/sdaa.

On a disk from another manufacturer, Hitachi Data Systems, for example, the output is different. It may resemble:

  # vxfenadm -i /dev/sdz
  Vendor id     : HITACHI
  Product id    : OPEN-3
  Revision      : 0117
  Serial Number : 0401EB6F0002

Refer to the vxfenadm(1m) manual page.
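A short script can automate this comparison for each candidate disk. The sketch below is illustrative: it assumes rsh access to the second node (nebula) and that vxfenadm is in root's PATH on both systems; the paths are passed as arguments because the same LUN may have different names on each node.

  #!/bin/sh
  # Compare the serial number a shared LUN reports on two nodes.
  # Usage: chkserial.sh <path-on-local-node> <path-on-nebula>
  PATH_LOCAL=$1
  PATH_REMOTE=$2
  SN_LOCAL=`vxfenadm -i $PATH_LOCAL | grep 'Serial Number'`
  SN_REMOTE=`rsh nebula "vxfenadm -i $PATH_REMOTE" | grep 'Serial Number'`
  echo "local : $SN_LOCAL"
  echo "nebula: $SN_REMOTE"
  if [ "$SN_LOCAL" = "$SN_REMOTE" ]; then
      echo "Serial numbers match -- both paths refer to the same LUN"
  else
      echo "Serial numbers differ -- these are NOT the same LUN"
  fi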

Testing Disks for Data Storage Using vxfentsthdw

Use the vxfentsthdw utility to test the shared storage arrays that are to be used for data. The utility verifies that the disks support SCSI-3 persistent reservations and I/O fencing.

Note Disks used as coordinator disks must also be tested. See Setting up Coordinator Disks on page 54.

General Guidelines for Using vxfentsthdw

- Connect the shared storage to be used for data to two of the systems where you installed Storage Foundation for Oracle RAC.

Caution The tests overwrite and destroy data on the disks, unless you use the -r option.

- The two systems must have either ssh or rsh configured so that each node has root user access to the other without password prompts.
- Make sure both systems are connected to the same disk during the testing, using the vxfenadm -i diskpath command to verify a disk's serial number. See Verifying that Systems See the Same Disk on page 51.

Running vxfentsthdw

This section describes the steps required to set up and test data disks for your initial installation. It describes using the vxfentsthdw utility with the default options. The vxfentsthdw utility indicates that a disk can be used for I/O fencing with a message resembling:

  The disk /dev/sdz is ready to be configured for I/O Fencing on
  node nebula

If the utility does not show a message stating a disk is ready, verification has failed. Assume, for example, you must check a shared device known by two systems as /dev/sdz. (Each system could use a different name for the same device.)

To run vxfentsthdw

1. Make sure system-to-system communication is set up.

2. By default, the vxfentsthdw utility uses rsh to communicate with remote systems. To use ssh for remote communication instead, specify the -s option when invoking the vxfentsthdw utility.

To start the vxfentsthdw utility using ssh:

  # /opt/VRTSvcs/vxfen/bin/vxfentsthdw -s

To start the vxfentsthdw utility using rsh (the default):

  # /opt/VRTSvcs/vxfen/bin/vxfentsthdw

The utility begins by providing an overview of its function and behavior. It warns you that its tests overwrite any data on the disks you check:

  ******** WARNING!!!!!!!! ********
  THIS UTILITY WILL DESTROY THE DATA ON THE DISK!!
  Do you still want to continue : [y/n] (default: n) y
  Enter the first node of the cluster: galaxy
  Enter the second node of the cluster: nebula

3. Enter the name of the disk you are checking. For each node, the disk may be known by the same name, as in this example:

  Enter the disk name to be checked for SCSI-3 PR on node galaxy
  in the format: /dev/sdz
  /dev/sdz
  Enter the disk name to be checked for SCSI-3 PR on node nebula
  in the format: /dev/sdz
  Make sure it's the same disk as seen by nodes galaxy and nebula
  /dev/sdz

Note that the disk names, whether or not they are identical, must refer to the same physical disk, or the testing terminates without success.

4. The utility starts to perform the check and reports its activities. For example:

  Testing galaxy /dev/sdz nebula /dev/sdz
  Registering keys on disk /dev/sdz from node galaxy....... Passed.
  Verifying registrations for disk /dev/sdz on node galaxy. Passed.
  Reads from disk /dev/sdz on node galaxy.................. Passed.
  Writes to disk /dev/sdz from node galaxy................. Passed.
  Reads from disk /dev/sdz on node nebula.................. Passed.
  Writes to disk /dev/sdz from node nebula................. Passed.
  Reservations to disk /dev/sdz from node galaxy........... Passed.
  ..

5. For a disk that is ready to be configured for I/O fencing on each system, the utility reports success. For example:

  ALL tests on the disk /dev/sdz have PASSED
  The disk is now ready to be configured for I/O Fencing on node
  galaxy
  ALL tests on the disk /dev/sdz have PASSED
  The disk is now ready to be configured for I/O Fencing on node
  nebula
  Cleaning up... Removing temporary files... Done.

6. Run the vxfentsthdw utility for each disk you intend to verify.

Note The vxfentsthdw utility has additional options suitable for testing many disks. The options for testing disk groups (-g) and disks listed in a file (-f) are described in detail in General Guidelines for Using vxfentsthdw on page 68. You can also test disks without destroying data using the -r option.

Setting up Coordinator Disks

I/O fencing requires coordinator disks configured in a disk group accessible to each system in the cluster. The use of coordinator disks enables the vxfen driver to resolve potential split brain conditions and prevent data corruption. See About I/O Fencing on page 15 for a discussion of I/O fencing and the role of coordinator disks. See also Using I/O Fencing on page 67 for additional description of how coordinator disks function to protect data in different split brain scenarios.

A coordinator disk is not used for data storage, and so may be configured as the smallest possible LUN on a disk array to avoid wasting space.

Requirements for Coordinator Disks

Coordinator disks must meet the following requirements:

- There must be at least three coordinator disks, and the total number of coordinator disks must be an odd number. This ensures that a majority of the disks can always be achieved.
- Each of the coordinator disks must use a physically separate disk or LUN.
- Each of the coordinator disks should be on a different disk array, if possible.
- Each disk must be initialized as a VxVM disk. The default (CDS) format is recommended.

- The coordinator disks must support SCSI-3 persistent reservations. See Requirements for Testing the Coordinator Disk Group on page 56.
- The coordinator disks must be included in a disk group named vxfencoorddg. See Setting up the vxfencoorddg Disk Group on page 55.

VERITAS recommends that coordinator disks use hardware-based mirroring.

Setting up the vxfencoorddg Disk Group

If you have already added and initialized disks you intend to use as coordinator disks, you can begin the following procedure at step 4.

To set up the vxfencoorddg disk group

1. Physically add the three disks you intend to use for coordinator disks. Add them as physically shared by all cluster systems. It is recommended you use the smallest size disks/LUNs, so that space for data is not wasted.

2. If necessary, use the vxdisk scandisks command to scan the disk drives and their attributes. This command updates the VxVM device list and reconfigures DMP with the new devices. For example:

  # vxdisk scandisks

3. Use the vxdisksetup command to initialize a disk as a VxVM disk. The example command that follows specifies the CDS format:

  vxdisksetup -i vxvm_device_name format=cdsdisk

For example:

  # vxdisksetup -i sdh format=cdsdisk

Repeat this command for each disk you intend to use as a coordinator disk.

4. From one system, create a disk group named vxfencoorddg. This group must contain an odd number of disks/LUNs and a minimum of three disks. For example, assume the disks have the device names sdh, sdj, and sdk. On any node, create the disk group by specifying the device names of the disks:

  # vxdg -t init vxfencoorddg sdh sdj sdk

Refer to the VERITAS Volume Manager Administrator's Guide for more information about creating disk groups. A combined sketch of steps 2 through 4 follows this procedure.
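Taken together, steps 2 through 4 amount to a short sequence that can be run from one node. The following sketch assumes the illustrative device names used above and that the disks have already been verified against the requirements listed earlier:

  #!/bin/sh
  # Initialize three small shared LUNs and group them as the
  # coordinator disk group; run from any one node.
  DISKS="sdh sdj sdk"            # illustrative device names
  vxdisk scandisks               # refresh the VxVM device list
  for d in $DISKS; do
      vxdisksetup -i $d format=cdsdisk
  done
  vxdg -t init vxfencoorddg $DISKS   # odd number, minimum three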

Requirements for Testing the Coordinator Disk Group

The utility requires that the coordinator disk group, vxfencoorddg, be accessible from two systems. For example, if you have a four-system cluster, select any two systems for the test.

- The two systems being tested must be able to communicate over ssh or rsh. If ssh is used, the vxfentsthdw utility should be launched with the -s option.
- Make sure both systems are connected to the same disks during the testing; you can use the vxfenadm -i diskpath command to verify a disk's serial number. See Verifying that Systems See the Same Disk on page 51.

Testing the Coordinator Disk Group

In the example that follows, the three disks are tested by the vxfentsthdw utility, one disk at a time from each node. From the node galaxy, the disks are sdz, sdy, and sdx. From the node nebula, the same disks are seen as sdh, sdi, and sdj.

To use vxfentsthdw -c

1. Use the vxfentsthdw command with the -c option. For example:

  # /opt/VRTSvcs/vxfen/bin/vxfentsthdw -c vxfencoorddg

2. The script prompts you for the names of the systems you are using to test the coordinator disks:

  Enter the first node of the cluster: galaxy
  Enter the second node of the cluster: nebula
  ********************************************
  Testing galaxy /dev/sdz nebula /dev/sdh
  Evaluating the disk before testing........ No pre-existing keys.
  Registering keys on disk /dev/sdz from node galaxy....... Passed.
  Verifying registrations for disk /dev/sdz on node galaxy. Passed.
  Registering keys on disk /dev/sdh from node nebula....... Passed.
  Verifying registrations for disk /dev/sdz on node galaxy. Passed.
  Verifying registrations for disk /dev/sdh on node nebula. Passed.
  Preempt and abort key KeyA using key KeyB on node nebula. Passed.

  Verifying registrations for disk /dev/sdz on node galaxy. Passed.
  Verifying registrations for disk /dev/sdh on node nebula. Passed.
  Removing key KeyB on node nebula......................... Passed.
  Check to verify there are no keys from node galaxy....... Passed.
  ALL tests on the disk /dev/sdz have PASSED.
  The disk is now ready to be configured for I/O Fencing on node
  galaxy as a COORDINATOR DISK.
  ALL tests on the disk /dev/sdh have PASSED.
  The disk is now ready to be configured for I/O Fencing on node
  nebula as a COORDINATOR DISK.
  ********************************************
  Testing galaxy /dev/sdy nebula /dev/sdi
  ..

The preceding shows the output of the test utility as it tests one disk. The vxfencoorddg disk group is ready for use when all disks in the disk group are successfully tested.

Removing and Adding a Failed Disk

If a disk in the coordinator disk group fails verification, remove the failed disk or LUN from the vxfencoorddg disk group, replace it with another, and retest the disk group.

- Use the vxdiskadm utility to remove the failed disk from the disk group. Refer to the VERITAS Volume Manager Administrator's Guide.
- Add a new disk to the system, initialize it, and add it to the coordinator disk group. See Setting up the vxfencoorddg Disk Group on page 55.
- Retest the disk group. See Requirements for Testing the Coordinator Disk Group on page 56.

Note If you need to replace a disk in an active coordinator disk group, refer to the troubleshooting procedure, Adding or Removing Coordinator Disks on page 87.

Creating /etc/vxfendg to Set Up the Disk Group for I/O Fencing

When you have set up and tested the coordinator disk group, configure it for use.

To configure the coordinator disk group

1. Deport the disk group:

  # vxdg deport vxfencoorddg

2. Import the disk group with the -t option so that it is not automatically imported when the systems are restarted:

  # vxdg -t import vxfencoorddg

3. Deport the disk group again. Deporting the disk group prevents the coordinator disks from being used for other purposes:

  # vxdg deport vxfencoorddg

4. On all systems, enter the command (a loop for running this on every node appears after this procedure):

  # echo "vxfencoorddg" > /etc/vxfendg

No spaces should appear between the quotes in the "vxfencoorddg" text. This command creates the file /etc/vxfendg, which includes the name of the coordinator disk group. Based on the contents of the /etc/vxfendg file, the rc script creates the file /etc/vxfentab for use by the vxfen driver when the system starts. The /etc/vxfentab file is a generated file and should not be modified.

5. Go to Editing VCS Configuration to Add the UseFence Attribute on page 59 to edit the main.cf file and add the UseFence = SCSI3 attribute to the VCS configuration.
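Because step 4 must be repeated on every node, a loop such as the following (a sketch; the node names are the examples used in this guide) saves running the command by hand and reads the file back as a check:

  #!/bin/sh
  # Create /etc/vxfendg on every cluster node and display it.
  for node in galaxy nebula; do
      rsh $node 'echo "vxfencoorddg" > /etc/vxfendg'
      echo -n "$node: /etc/vxfendg contains: "
      rsh $node cat /etc/vxfendg
  done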

Editing VCS Configuration to Add the UseFence Attribute

After adding coordinator disks and configuring I/O fencing, edit the VCS configuration file, /etc/VRTSvcs/conf/config/main.cf, and add the UseFence cluster attribute.

To edit the VCS configuration file

1. Save the existing configuration:

  # haconf -dump -makero

2. Stop VCS on all nodes:

  # hastop -all

3. Make a backup copy of the main.cf file:

  # cd /etc/VRTSvcs/conf/config
  # cp main.cf main.orig

4. On one node, use vi or another text editor to edit the main.cf file. Modify the list of cluster attributes by adding the UseFence attribute and assigning it the value SCSI3. For example, with the attribute added, this portion of the file resembles:

  cluster rac_cluster1 (
          UserNames = { admin = "cdrpdxpmhpzs." }
          Administrators = { admin }
          HacliUserLevel = COMMANDROOT
          CounterInterval = 5
          UseFence = SCSI3
          )

5. Save and close the file.

6. Verify the syntax of the file /etc/VRTSvcs/conf/config/main.cf:

  # hacf -verify /etc/VRTSvcs/conf/config

7. Using rcp, or some other utility, copy the VCS configuration file from galaxy, for example, to the remaining cluster nodes. For example, on each remaining node, enter:

  # rcp galaxy:/etc/VRTSvcs/conf/config/main.cf /etc/VRTSvcs/conf/config
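After copying the file to the other nodes, a quick loop can confirm that every node has the attribute and that the configuration still parses. The sketch below assumes rsh access and the standard VCS install location; hacf prints nothing when the configuration is valid.

  #!/bin/sh
  # Confirm UseFence is present and main.cf is syntactically valid
  # on every node.
  for node in galaxy nebula; do
      echo "--- $node ---"
      rsh $node grep UseFence /etc/VRTSvcs/conf/config/main.cf
      rsh $node /opt/VRTSvcs/bin/hacf -verify /etc/VRTSvcs/conf/config
  done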

Removing the /etc/vxfenmode File

After I/O fencing has been configured on all cluster nodes, the /etc/vxfenmode file must be removed. The contents of this file cause I/O fencing to start up in disabled mode.

To remove the /etc/vxfenmode file

Execute the following on all nodes of the cluster:

  # rm /etc/vxfenmode

Starting I/O Fencing in Enabled Mode

Once I/O fencing has been set up on all the nodes of the cluster, I/O fencing and VCS must first be stopped and then restarted on all nodes of the cluster.

Note The following steps must be executed in the order presented.

To stop VCS

On any node, enter:

  # /opt/VRTS/bin/hastop -all

To stop I/O fencing

1. On each node, enter:

  # /sbin/service vxfen stop

2. To start I/O fencing, reboot all nodes:

  # reboot

To verify the I/O fencing configuration

Enter:

  # vxfenadm -d

The output of this command should look similar to:

  I/O Fencing Cluster Information:
  ================================
  Cluster Members:
   * 0 (galaxy)
     1 (nebula)
  RFSM State Information:
     node 0 in state 8 (running)
     node 1 in state 8 (running)

Note If the command reports that I/O fencing is running in disabled mode, then I/O fencing has not been configured properly. Verify that each step listed in Setting Up Shared Storage and I/O Fencing for SFRAC on page 49 has been properly performed.

To start VCS

On each node, enter:

  # /opt/VRTS/bin/hastart

Note After completing the above steps, verify GAB port membership; see Verifying GAB Port Membership on page 47.

An Example /etc/vxfentab File

On each system, the coordinator disks are listed in the file /etc/vxfentab. The same disks may be listed using different names on each system. An example /etc/vxfentab file on one system resembles:

  /dev/sdz
  /dev/sdy
  /dev/sdx

When the system starts, the rc startup script automatically creates /etc/vxfentab and then invokes the vxfenconfig command, which configures the vxfen driver to start and use the coordinator disks listed in /etc/vxfentab.
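The vxfenadm -d verification above can also be repeated on every node at once. The following is a sketch; it captures the output of vxfenadm -d over rsh (rather than relying on the remote exit status, which rsh does not return reliably) and assumes the node names used in this guide:

  #!/bin/sh
  # Check on every node that I/O fencing came up enabled.
  for node in galaxy nebula; do
      out=`rsh $node vxfenadm -d 2>&1`
      case "$out" in
          *Disabled*) echo "$node: fencing DISABLED -- revisit the setup steps" ;;
          *running*)  echo "$node: fencing running" ;;
          *)          echo "$node: unexpected output -- inspect manually" ;;
      esac
  done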

Configuring CVM and CFS Manually (Optional)

If you did not configure CVM and CFS while running the installsfrac script, you can manually start the cfscluster utility and configure CVM and CFS.

To configure CVM and CFS manually

1. Start the cfscluster utility with the config option:

  # cd /opt/VRTSvxfs/cfs/bin
  # ./cfscluster config

  The cluster configuration information as read from cluster
  configuration file is as follows.
  Cluster : rac_cluster1
  Nodes : galaxy nebula
  You will now be prompted to enter the information pertaining
  to the cluster and the individual nodes.

2. You can choose a different timeout value, if necessary, or press Enter to accept the default:

  Enter the timeout value in seconds. This timeout will be used
  by CVM in the protocol executed during cluster reconfiguration.
  Choose the default value, if you do not wish to enter any
  specific value.
  Timeout: [<timeout>,q,?] (default: 200)

3. When additional cluster information is displayed, gab is shown as the configured transport protocol, meaning GAB provides communications between systems in the cluster. For VERITAS Storage Foundation 4.1 for Oracle RAC, GAB is required:

  Following is the summary of the information:
  Cluster : rac_cluster1
  Nodes : galaxy nebula
  Transport : gab
  Hit RETURN to continue.
  Is this correct [y,n,q,?] (default: y) y

The configuration of CVM begins if you enter y.

4. After the CVM configuration has completed, verify that the configuration was successful using the hagrp command:

  galaxy # hagrp -state cvm
  #Group  Attribute  System  Value
  cvm     State      galaxy  ONLINE
  cvm     State      nebula  ONLINE

Example VCS Configuration File After Installation

To verify the installation, you can examine the VCS configuration file, main.cf, in the directory /etc/VRTSvcs/conf/config. See main.cf File Created During Installation: an Example on page 64 for an example of a configuration file created during installation.

Note the following about the VCS configuration file after Storage Foundation for Oracle RAC installation:

- The include statements list types files for VCS (types.cf), CFS (CFSTypes.cf), and CVM (CVMTypes.cf); these files are in the /etc/VRTSvcs/conf/config directory. The types file for the Oracle enterprise agent (OracleTypes.cf) is also located in /etc/VRTSvcs/conf/config. These files define the agents that control the resources in the cluster.
- The VCS types include all agents bundled with VCS. Refer to the VERITAS Bundled Agents Reference Guide for information about all VCS agents.
- The CFS types include CFSMount and CFSfsckd. The CFSMount agent mounts and unmounts the shared volume file systems. CFSfsckd is a type defined for the cluster file system daemon and does not require user configuration.
- The CVM types include the CVMCluster, CVMVolDg, and CVMVxconfigd types. The CVMCluster resource, which is automatically configured during installation, starts CVM in the cluster, controls system membership in the cluster, and defines how systems in the cluster are to communicate volume state information. Refer to Using Agents on page 361. The CVMVolDg resource imports disk groups and sets activation modes. The CVMVxconfigd resource monitors the VERITAS Volume Manager configuration daemon.
- The Oracle enterprise agent types file includes definitions for the Oracle agent and the Netlsnr agent. The Oracle agent monitors the resources for an Oracle database; the Netlsnr resource manages the resources for the listener process.

- The cluster systems are listed. In the example, main.cf File Created During Installation: an Example on page 64, two systems are listed. Sample Configuration Files on page 343 shows an example cluster configuration with four systems.
- The service group cvm is added. It includes definitions for monitoring the CFS and CVM resources. The CVMCluster agent resource definition indicates how the systems in the cluster are to communicate volume state information. Later, after the installation of Oracle, the cvm group must be modified to include the Netlsnr, IP, and NIC resources required for the listener process.
- The cvm group has the Parallel attribute set to 1, which means that the resources are set to run in parallel on each system in the system list.

main.cf File Created During Installation: an Example

The following is an example of a VCS configuration file created during installation. The configuration file created on your system is located at /etc/VRTSvcs/conf/config/main.cf:

  include "vcsapachetypes.cf"
  include "types.cf"
  include "CFSTypes.cf"
  include "CVMTypes.cf"
  include "OracleTypes.cf"

  cluster rac_test01 (
          UserNames = { admin = dqrjqlqnrmrrpzrlqo }
          Administrators = { admin }
          HacliUserLevel = COMMANDROOT
          CounterInterval = 5
          )

  system galaxy (
          )

  system nebula (
          )

  group cvm (
          SystemList = { galaxy = 0, nebula = 1 }
          AutoFailOver = 0
          Parallel = 1
          AutoStartList = { galaxy, nebula }
          )

          CFSfsckd vxfsckd (
                  )

   CVMCluster cvm_clus (
       CVMClustName = rac_test01
       CVMNodeId = { galaxy = 0, nebula = 1 }
       CVMTransport = gab
       CVMTimeout = 200
       )

   CVMVxconfigd cvm_vxconfigd (
       Critical = 0
       CVMVxconfigdArgs = { syslog }
       )

   cvm_clus requires cvm_vxconfigd
   vxfsckd requires cvm_clus

   // resource dependency tree
   //
   // group cvm
   // {
   // CFSfsckd vxfsckd
   //     {
   //     CVMCluster cvm_clus
   //         {
   //         CVMVxconfigd cvm_vxconfigd
   //         }
   //     }
   // }
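If you later edit main.cf (for example, to add the Netlsnr, IP, and NIC resources to the cvm group as described above), it is prudent to check the configuration syntax before restarting VCS. A minimal check, sketched here with the standard hacf utility, looks like this:

   # cd /etc/VRTSvcs/conf/config
   # hacf -verify .

If the configuration is syntactically correct, hacf returns silently; otherwise it reports the offending line.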


Using I/O Fencing

This chapter discusses I/O fencing as it functions in response to certain event scenarios.


About I/O Fencing of Shared Storage

When multiple systems have access to the data on shared storage, the integrity of the data depends on the systems communicating with each other so that each is aware when the other is writing data. Usually this communication occurs in the form of heartbeats through the private networks between the systems. If the private links are lost, or even if one of the systems is hung or too busy to send or receive heartbeats, each system could be unaware of the other's activities with respect to writing data. This is a split brain condition and can lead to data corruption.

The I/O fencing capability of Storage Foundation for Oracle RAC, managed by VERITAS Volume Manager, prevents data corruption in the event of a split brain condition by using SCSI-3 persistent reservations for disks. This allows a set of systems to have registrations with the disk and a write-exclusive registrants-only reservation with the disk containing the data, meaning that only these systems can read and write to the disk, while any other system can only read the disk. The I/O fencing feature fences out a system that no longer sends heartbeats to the other systems by preventing it from writing data to the disk.

VxVM manages all shared storage subject to I/O fencing. It assigns the keys that systems use for registrations and reservations for the disks, including all paths, in the specified disk groups. The vxfen driver is aware of which systems have registrations and reservations with specific disks. To protect the data on shared disks, each system in the cluster must be configured to use I/O fencing.

Verifying Data Storage Arrays Using the vxfentsthdw Utility

You can use the vxfentsthdw utility to verify that shared storage arrays to be used for data support SCSI-3 persistent reservations and I/O fencing. The description in Chapter 3 shows how to use the testing utility to test a single disk. The utility has other options that may be more suitable for testing storage devices in other configurations.

Note: Disks used as coordinator disks must also be tested. See "Setting up Coordinator Disks" on page 54.

The utility, which you can run from one system in the cluster, tests the storage used for data by setting and verifying SCSI-3 registrations on the disk or disks you specify, setting and verifying persistent reservations on the disks, writing data to the disks and reading it, and removing the registrations from the disks. Refer also to the vxfentsthdw(1M) manual page.


General Guidelines for Using vxfentsthdw

- The utility requires two systems connected to the shared storage. For example, if you have a four-system cluster, you may select any two systems for the test.

  Caution: The tests overwrite and destroy data on the disks, unless you use the -r option.

- The two systems being tested must be able to communicate using ssh or rsh. If ssh is used, launch the vxfentsthdw utility with the -s option.

- Make sure both systems are connected to the same disk during the testing; you can use the vxfenadm -i diskpath command to verify a disk's serial number (see the example after this list). See "Verifying that Systems See the Same Disk" on page 51.

- For disk arrays with many disks, use the -m option to sample a few disks before creating a disk group and using the -g option to test them all.

- When testing many disks with the -f or -g option, you can review results by redirecting the command output to a file.

- The utility indicates a disk can be used for I/O fencing with a message resembling:

     The disk /dev/sdh is ready to be configured for I/O Fencing on
     node nebula

  If the utility does not show a message stating a disk is ready, verification has failed.

- If the disk you intend to test has existing SCSI-3 registration keys, the test fails to complete. You must remove the keys before testing. See "vxfentsthdw Fails When Prior Registration Key Exists on Disk" on page 82.
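For the serial-number comparison mentioned in the guidelines above, run vxfenadm -i against the same LUN from each node. The device name and the output values below are illustrative only:

   galaxy # vxfenadm -i /dev/sdh
   Vendor id       : EMC
   Product id      : SYMMETRIX
   Revision        : 5567
   Serial Number   : 42031000a

Repeat the command from nebula using its path to the disk; the serial numbers should match.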

vxfentsthdw Methods for Testing Storage Devices

-m             The utility runs manually, in interactive mode, prompting for systems and devices, and reporting success or failure. May be used with the -r and -t options. -m is the default option. Use it for testing a few disks or for sampling disks in larger arrays.

-f filename    The utility tests the system/device combinations listed in a text file. May be used with the -r and -t options. Use it for testing several disks.

-g disk_group  The utility tests all disk devices in a specified disk group. May be used with the -r and -t options. Use it for testing many disks and arrays of disks. Disk groups may be temporarily created for testing purposes and destroyed (ungrouped) after testing. (A workflow sketch appears at the end of this section.)


Using the -r Option for Non-destructive Testing

To test disk devices containing data you want to preserve, use the -r option with the -m, -f, or -g options, which are described in the following sections. For example, to use the -m option and the -r option, you can run the utility by entering:

   # /opt/VRTSvcs/vxfen/bin/vxfentsthdw -rm

When invoked with the -r option, the utility does not use tests that write to the disks. Therefore, it does not test the disks for all of the usual conditions of use.


Using the -m Option

The -m option for vxfentsthdw is described in detail in Chapter 3.


Using the -f Option: Example

Use the -f option to test disks that are listed in a text file. For example, you can create a file to test two disks shared by systems galaxy and nebula that might resemble:

   galaxy /dev/sdj nebula /dev/sdf
   galaxy /dev/sdg nebula /dev/sdm

In this case, the first disk is listed in the first line and is seen by galaxy as /dev/sdj and by nebula as /dev/sdf. The other disk, in the second line, is seen as /dev/sdg from galaxy and /dev/sdm from nebula. Typically, the list of disks could be extensive.

Suppose you created the file named disks_blue. To test the disks, you would enter:

   # /opt/VRTSvcs/vxfen/bin/vxfentsthdw -f disks_blue

The utility reports the test results one disk at a time, just as for the -m option. You can redirect the test results to a file. Precede the command with yes to acknowledge that the testing destroys any data on the disks to be tested.

Caution: By redirecting the command's output to a file, the warning that the testing destroys the data on the disks cannot be seen until the testing is done.

For example:

   # /opt/VRTSvcs/vxfen/bin/vxfentsthdw -f disks_blue > blue_test.txt


Testing a Disk with Existing Keys

If the utility detects that a coordinator disk has existing keys, you see a message that resembles:

   Evaluating the disk before testing .......... Pre-existing keys.
   I/O fencing appears to be configured.
   Shut down I/O fencing and relaunch vxfentsthdw.
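To round out the vxfentsthdw examples in this section, a sketch of the -g workflow follows. The disk group name tempdg and the device sdj are illustrative, and the test destroys data on the disks unless the -r option is added:

   # vxdg init tempdg sdj
   # /opt/VRTSvcs/vxfen/bin/vxfentsthdw -g tempdg
   # vxdg destroy tempdg

Destroying the temporary disk group after testing returns the disks to an ungrouped state, as noted in the options table above.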

How I/O Fencing Works in Different Event Scenarios

The following scenarios describe how I/O fencing works to prevent data corruption in different failure events. For each event, corrective operator actions are indicated.

Event: Both private networks fail.
   Node A: Races for a majority of the coordinator disks. If Node A wins the race, it ejects Node B from the shared disks and continues.
   Node B: Races for a majority of the coordinator disks. If Node B loses the race, it removes itself from the cluster.
   Operator action: When Node B is ejected from the cluster, repair the private networks before attempting to bring Node B back.

Event: Both private networks function again after the event above.
   Node A: Continues to work.
   Node B: Has crashed. It cannot start the database, since it is unable to write to the data disks.
   Operator action: Reboot Node B after the private networks are restored.

Event: One private network fails.
   Node A: Prints a message about an IOFENCE on the console but continues.
   Node B: Prints a message about an IOFENCE on the console but continues.
   Operator action: Repair the private network. After the network is repaired, both nodes automatically use it.

Event: Node A hangs.
   Node A: Is extremely busy for some reason or is in the kernel debugger. When Node A is no longer hung or in the kernel debugger, any queued writes to the data disks fail because Node A is ejected. When Node A receives a message from GAB about being ejected, it removes itself from the cluster.
   Node B: Loses heartbeats with Node A and races for a majority of the coordinator disks. Node B wins the race for the coordinator disks and ejects Node A from the shared data disks.
   Operator action: Verify that the private networks function, then reboot Node A.

Event: Nodes A and B and the private networks lose power. The coordinator and data disks retain power. Power returns to the nodes and they reboot, but the private networks still have no power.
   Node A: Reboots, and the I/O fencing driver (vxfen) detects that Node B is registered with the coordinator disks. The driver does not see Node B listed as a member of the cluster because the private networks are down. This causes the I/O fencing device driver to prevent Node A from joining the cluster. The Node A console displays:

      Potentially a preexisting split brain. Dropping out of the
      cluster. Refer to the user documentation for steps required
      to clear preexisting split brain.

   Node B: Reboots, and the I/O fencing driver (vxfen) detects that Node A is registered with the coordinator disks. The driver does not see Node A listed as a member of the cluster because the private networks are down. This causes the I/O fencing device driver to prevent Node B from joining the cluster. The Node B console displays the same message.
   Operator action: Refer to the Troubleshooting chapter for instructions on resolving the pre-existing split brain condition.

Event: Node A crashes while Node B is down. Node B comes up and Node A is still down.
   Node A: Is crashed.
   Node B: Reboots and detects that Node A is registered with the coordinator disks. The driver does not see Node A listed as a member of the cluster. The I/O fencing device driver prints a message on the console:

      Potentially a preexisting split brain. Dropping out of the
      cluster. Refer to the user documentation for steps required
      to clear preexisting split brain.

   Operator action: Refer to the Troubleshooting chapter for instructions on resolving the pre-existing split brain condition.

Event: The disk array containing two of the three coordinator disks is powered off.
   Node A: Continues to operate as long as no nodes leave the cluster.
   Node B: Continues to operate as long as no nodes leave the cluster.

Event: Node B leaves the cluster while the disk array is still powered off.
   Node A: Races for a majority of the coordinator disks. Node A fails because only one of the three coordinator disks is available. Node A removes itself from the cluster.
   Node B: Leaves the cluster.
   Operator action: Power on the failed disk array and restart the I/O fencing driver to enable Node A to register with all coordinator disks.
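For the last scenario, once the failed array is powered back on, the fencing driver can be restarted with the same init script used elsewhere in this guide. Run it on the node that removed itself from the cluster:

   # /etc/init.d/vxfen start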

Using the vxfenadm Utility

Administrators can use the vxfenadm command to troubleshoot and test fencing configurations. The command's options for use by administrators are:

   -g   read and display keys
   -i   read SCSI inquiry information from device
   -m   register with disks
   -n   make a reservation with disks
   -p   remove registrations made by other systems
   -r   read reservations
   -x   remove registrations


Registration Key Formatting

The key defined by VxVM associated with a disk group consists of seven bytes maximum. This key becomes unique among the systems when VxVM prefixes it with the ID of the system, so the key used for I/O fencing consists of eight bytes in all:

   Byte      0         1 through 7
   Content   Node ID   VxVM-defined (seven bytes)

The keys currently assigned to disks can be displayed by using the vxfenadm command. For example, from the system with node ID 1, display the key for the disk /dev/sdk by entering:

   # vxfenadm -g /dev/sdk
   Reading SCSI Registration Keys...
   Device Name: /dev/sdk
   Total Number of Keys: 1
   key[0]:
     Key Value [Numeric Format]: 65,80,71,82,48,48,48,48
     Key Value [Character Format]: APGR0000

The -g option of vxfenadm displays all eight bytes of a key value in two formats. In the numeric format, the first byte, representing the node ID, contains the system ID plus 65. The remaining bytes contain the ASCII values of the letters of the key, in this case PGR0000. In the character format, node ID 0 is expressed as A; node ID 1 would be B.
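To survey the keys on every coordinator disk at once rather than one device at a time, the same -g option can be pointed at the fencing configuration file, as in this sketch:

   # vxfenadm -g all -f /etc/vxfentab

The output lists the registration keys on each coordinator disk in the same numeric and character formats shown above.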

Troubleshooting Storage Foundation for Oracle RAC

This chapter presents various problems and their solutions.


Running Scripts for Engineering Support Analysis

Three troubleshooting scripts gather information about the configuration and status of your cluster and its modules. The scripts identify package information, debugging messages, console messages, and information about disk groups and volumes. Forwarding the output of these scripts to VERITAS Tech Support can assist with analyzing and solving any problems.

getdbac

This script gathers information about the VERITAS Storage Foundation 4.1 for Oracle RAC modules. The file /tmp/vcsopslog.time_stamp.sys_name.tar.Z contains the script's output.

To use getdbac

On each system, enter:

   # /opt/VRTSvcs/bin/getdbac -local

getcomms

This script gathers information about the GAB and LLT modules. The file /tmp/commslog.time_stamp.tar contains the script's output.

To use getcomms

On each system, enter:

   # /opt/VRTSgab/getcomms -local

hagetcf

This script gathers information about the VCS cluster and the status of resources. The output from this script is placed in a tar file, /tmp/vcsconf.sys_name.tar, on each cluster system.

To use hagetcf

On each system, enter:

   # /opt/VRTSvcs/bin/hagetcf


Troubleshooting Tips

To check the Oracle installation error log

Access:

   $ORACLE_BASE/oraInventory/logs/installActions<date_time>.log

This file contains errors that occurred during installation. It clarifies the nature of the error and exactly at which point it occurred during the installation. If there are any installation problems, sending this file to Tech Support is required for debugging the issue.

To check the VERITAS log file

Access:

   /var/VRTSvcs/log/engine_A.log

This file contains all actions performed by HAD. Check whether any CVM or PrivNIC errors are logged in this file; they may prove to be critical errors.


Troubleshooting Oracle

For help resolving issues with Oracle components, check the:

- Oracle log files
- Oracle Notes
- Oracle Troubleshooting Topics

Oracle Log Files

To check the Oracle CRS log

Access:

   $ORA_CRS_HOME/crs/log

This directory contains the logs pertaining to the CRS resources such as the virtual IP, listener, and database instances. It indicates configuration errors or Oracle problems, since CRS does not directly interact with any of the VERITAS components.

To check for core dumps

Access:

   $ORA_CRS_HOME/crs/init

Core dumps for the crsd.bin daemon are written here. Use them for further debugging.

To check the Oracle css log

Access:

   $ORA_CRS_HOME/css/log

The css logs indicate actions such as reconfigurations, missed checkins, connects, and disconnects from the client CSS listener. If there are membership issues, they will show up here. If there are communication issues over the private networks, they are logged here. The ocssd process interacts with vcsmm for cluster membership.

To check for ocssd core dumps

Access:

   $ORA_CRS_HOME/css/init

Core dumps from ocssd, and the pid for the css daemon whose death is treated as fatal, are located here. If there are abnormal restarts for css, the core files are found here.

Oracle Notes

- CRS and 10g Real Application Clusters

  Note: This Oracle Note is extremely important to read.

- How to install Oracle 10g CRS on a cluster where one or more nodes are not to be configured to run CRS

- 10g RAC: Troubleshooting CRS Reboots

- How to Restore a Lost Vote Disk in 10g

- 10g RAC: How to Clean Up After a Failed CRS Install

  Two items missing in that note are:

  - Remove the /etc/oracle/ocr.loc file. This file contains the location of the Cluster Registry. If this file is not removed, then during the next installation the installer will not query for the OCR location and will pick it up from this file.

  - If there was a previous Oracle9i installation, remove the file /var/opt/oracle/srvconfig.loc. If this file is present, the installer will pick up the Vote disk location from this file and may report the error "the Vote disk should be placed on a shared file system" even before the Vote disk location is specified.

- CRS 10g Diagnostic Collection Guide


Oracle Troubleshooting Topics

Headings indicate likely symptoms or procedures required for a solution.

Oracle User Must Be Able to Read /etc/llttab File

Check the permissions of the file /etc/llttab. Oracle must be allowed to read it.

Error When Starting an Oracle Instance

If the VCSMM driver (the membership module) is not configured, an error resembling the following displays on starting the Oracle instance:

   ORA-29702: error occurred in Cluster Group Operation

To start the VCSMM driver

Enter the following command:

   # /sbin/service vcsmm start

The command included in the /etc/vcsmmtab file enables the VCSMM driver to be started at system boot.

Missing Dialog Box During Installation of Oracle9i

During installation of Oracle9i Release 2 using the runInstaller utility, if you choose the Enterprise Edition or Custom Install (with the RAC option), a dialog box prompting you for the installation nodes should appear.

Note: VERITAS recommends using the installsfrac utility to install Oracle9i RAC. The following troubleshooting steps apply only if the runInstaller utility is called directly.

If the dialog box fails to appear:

1. Exit the runInstaller utility.

2. On the first system, copy the VERITAS CM library:

   a. Enter:

      # cd /usr/lib

   b. Enter:

      # cp libvcsmm.so $ORACLE_HOME/lib/libcmdll.so

3. Start the VCSMM driver on both nodes by entering:

   # /sbin/service vcsmm start

4. Restart the runInstaller utility.

Oracle Log Files Show Shutdown

The Oracle log files may show that a shutdown was called even when the database was not shut down manually. The Oracle enterprise agent calls shutdown if monitoring of the Oracle/Netlsnr resources fails for some reason. On all cluster nodes, look at the following VCS and Oracle agent log files for any errors or status:

   /var/VRTSvcs/log/engine_A.log
   /var/VRTSvcs/log/Oracle_A.log

root.sh Hangs after Oracle Binaries Installation

This may occur when using OCR on a raw volume if Oracle patch Number is not applied.

To prevent root.sh from hanging:

1. Install the Oracle Database binaries.

2. Apply Oracle patch Number , available on metalink.oracle.com. In the patch search criteria, specify it as the patch number and Linux Opteron as the Platform/Architecture.

3. Execute root.sh.

DBCA Fails While Creating Database

Verify that the hostname -i command returns the public IP address of the current node. This command is used by the installer and the output is stored in the OCR. If hostname -i returns the loopback address instead, it causes DBCA to fail. (A quick check appears at the end of this section.)

CRS Processes Fail to Start Up

Verify that the correct private IP address is configured on the private link using the PrivNIC agent. Check the CSS log files to learn more.

OCR and Vote Disk Related Issues

Verify that the permissions are set appropriately as given in the Oracle installation guide. If these files are present from a previous configuration, remove them. See "CRS Fails after Restart" below.
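Two quick checks for the last two symptoms above, shown as a sketch; the /ora_crs paths match the examples used earlier in this guide, and hostname -i should print the node's public IP address:

   # hostname -i
   # ls -ld /ora_crs/ocr /ora_crs/vote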

CRS Fails after Restart

If CRS fails to start up after boot, check for the occurrence of the following strings in the /var/log/messages file:

"Oracle CSSD failure. Rebooting for cluster integrity"

   Possible causes: a communication failure occurred and CRS fenced out the node; the OCR and Vote disk became unavailable; ocssd was killed manually and, on restarting from inittab, it rebooted the cluster; or the init.cssd script was killed.

"Waiting for file system containing"

   The CRS installation is on a shared disk and the init script is waiting for that file system to become available.

"Oracle Cluster Ready Services disabled by corrupt install"

   The following file is not available or has corrupt entries: /etc/oracle/scls_scr/<hostname>/root/crsstart.

"OCR initialization failed accessing OCR device"

   The shared file system containing the OCR is not available and CRS is waiting for it to become available.

OCRDUMP

Executing $ORA_CRS_HOME/bin/ocrdump creates an OCRDUMPFILE in the working directory. This text file contains a dump of all the parameters stored in the cluster registry, which is useful in case of errors. Check whether the variable SYSTEM.css.misscount occurs in the OCRDUMPFILE. This variable is the timeout value, in seconds, that CRS uses to fence off nodes in case of communication failure. If this variable is absent (as it should be, since it is removed from root.sh during installation), the fencing timeout value is set to 15 minutes.
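A sketch of the OCRDUMP check described above; because the dump is written to the working directory, run it from a scratch directory:

   # cd /tmp
   # $ORA_CRS_HOME/bin/ocrdump
   # grep -i misscount /tmp/OCRDUMPFILE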

Troubleshooting Fencing

Headings indicate likely symptoms or procedures required for a solution.

SCSI Reservation Errors During Bootup

When restarting a node of an SFRAC cluster, SCSI reservation errors may be observed, such as:

   Mar 25 13:18:28 galaxy kernel: scsi3 (0,0,6) : RESERVATION CONFLICT

This message is printed for each disk that is a member of any shared disk group protected by SCSI-3 I/O fencing. The message may be safely ignored.

vxfentsthdw Fails When SCSI TEST UNIT READY Command Fails

A message may occur resembling:

   Issuing SCSI TEST UNIT READY to disk reserved by other node
   FAILED.
   Contact the storage provider to have the hardware configuration
   fixed.

The disk array does not support returning success for a SCSI TEST UNIT READY command when another host has the disk reserved using SCSI-3 persistent reservations. This happens with Hitachi Data Systems 99XX arrays if bit 186 of the system mode option is not enabled.

vxfentsthdw Fails When Prior Registration Key Exists on Disk

If you attempt to use the vxfentsthdw utility to test a disk that already has a registration key set, the test fails. If you suspect a key exists on the disk you plan to test, use the vxfenadm -g command to display it.

To display a key on a disk

Enter:

   # vxfenadm -g diskname

If the disk is not SCSI-3 compliant, an error is returned indicating: Inappropriate ioctl for device.

If you have a SCSI-3 compliant disk and no key exists, the output resembles:

   Reading SCSI Registration Keys...
   Device Name: diskname
   Total Number Of Keys: 0
   No keys...

Proceed to test the disk using the vxfentsthdw utility. See "Testing Disks for Data Storage Using vxfentsthdw" on page 52.

If keys exist, you must remove them before you test the disk. Refer to "Removing Existing Keys from Disks" in the next section.

Removing Existing Keys from Disks

Use this procedure to remove existing registration and reservation keys created by another node from a disk.

To remove the registration and reservation keys from a disk

1. Create a file to contain the access names of the disks:

   # vi /tmp/disklist

   For example:

   /dev/sdh

2. Read the existing keys:

   # vxfenadm -g all -f /tmp/disklist

   The output from this command displays the key:

   Device Name: /dev/sdh
   Total Number Of Keys: 1
   key[0]:
     Key Value [Numeric Format]: 65,49,45,45,45,45,45,45
     Key Value [Character Format]: A1------

3. If you know on which node the key was created, log in to that node and enter:

   # vxfenadm -x -kA1 -f /tmp/disklist

   The key is removed. If you do not know on which node the key was created, follow step 4 through step 6 to remove the key.

4. Register a second key, A2, temporarily with the disk:

   # vxfenadm -m -kA2 -f /tmp/disklist
   Registration completed for disk path /dev/sdh

5. Remove the first key from the disk by preempting it with the second key:

   # vxfenadm -p -kA2 -f /tmp/disklist -vA1
   key: A2------ prempted the key: A1------ on disk /dev/sdh

6. Remove the temporary key assigned in step 4:

   # vxfenadm -x -kA2 -f /tmp/disklist
   Deleted the key : [A2------] from device /dev/sdh

   No registration keys exist for the disk.

7. Verify that the keys were properly cleaned:

   # vxfenadm -g all -f /tmp/disklist

System Panics to Prevent Potential Data Corruption

When a system experiences a split brain condition and is ejected from the cluster, it panics and displays the following console message:

   VXFEN:vxfen_plat_panic: Local cluster node ejected from cluster
   to prevent potential data corruption.

How the vxfen Driver Checks for a Pre-existing Split Brain Condition

The vxfen driver prevents an ejected node from rejoining the cluster after the failure of the private network links and before the private network links are repaired.

For example, suppose the cluster of galaxy and nebula is functioning normally when the private network links are broken. Also suppose galaxy is the ejected system. When galaxy reboots before the private network links are restored, its membership configuration does not show nebula; however, when it attempts to register with the coordinator disks, it discovers that nebula is registered with them. Given this conflicting information about nebula, galaxy does not join the cluster and returns an error from vxfenconfig that resembles:

   vxfenconfig: ERROR: There exists the potential for a preexisting
   split-brain. The coordinator disks list no nodes which are in
   the current membership. However, they also list nodes which are
   not in the current membership.
   I/O Fencing Disabled!

Also, the following information is displayed on the console:

   <date> <system name> vxfen: WARNING: Potentially a preexisting
   <date> <system name> split-brain.
   <date> <system name> Dropping out of cluster.
   <date> <system name> Refer to user documentation for steps
   <date> <system name> required to clear preexisting split-brain.
   <date> <system name>
   <date> <system name> I/O Fencing DISABLED!
   <date> <system name>
   <date> <system name> gab: GAB:20032: Port b closed

However, the same error can occur when the private network links are working and both systems go down, galaxy reboots, and nebula fails to come back up. From galaxy's view of the cluster, nebula may still have the registrations on the coordinator disks.

Case 1: nebula Up, galaxy Ejected (Actual Potential Split Brain)

To respond to Case 1:

1. Determine whether galaxy is up or not.

2. If it is up and running, shut it down and repair the private network links to remove the split brain condition.

3. Restart galaxy.

Case 2: nebula Down, galaxy Ejected (Apparent Potential Split Brain)

To respond to Case 2:

1. Physically verify that nebula is down.

2. Verify the systems currently registered with the coordinator disks, using the following command:

   # vxfenadm -g all -f /etc/vxfentab

   The output of this command identifies the keys registered with the coordinator disks.

3. Clear the keys on the coordinator disks as well as the data disks using the command /opt/VRTSvcs/vxfen/bin/vxfenclearpre. See "Using vxfenclearpre Command to Clear Keys After Split Brain" on page 86.

4. Make any necessary repairs to nebula and reboot it.

Using vxfenclearpre Command to Clear Keys After Split Brain

When you have encountered a split brain condition, use the vxfenclearpre command to remove SCSI-3 registrations and reservations on the coordinator disks as well as on the data disks in all shared disk groups.

To use vxfenclearpre

1. To prevent data corruption, shut down all other systems in the cluster that have access to the shared storage.

2. Start the script:

   # cd /opt/VRTSvcs/vxfen/bin
   # ./vxfenclearpre

3. Read the script's introduction and warning. Then, you can choose to let the script run:

   Do you still want to continue: [y/n] (default : n) y

   Informational messages resembling the following may appear on the console of one of the nodes in the cluster when a node is ejected from a disk/LUN:

      scsi1 (0,6,0) : RESERVATION CONFLICT
      SCSI disk error : host 1 channel 0 id 6 lun 0 return code = 18
      I/O error: dev 08:80, sector 2000

   These informational messages may be ignored.

      Cleaning up the coordinator disks...
      Cleaning up the data disks for all shared disk groups...
      Successfully removed SCSI-3 persistent registration and
      reservations from the coordinator disks as well as the
      shared data disks.
      Reboot the server to proceed with normal cluster startup...
      #

4. Restart all systems in the cluster.

Adding or Removing Coordinator Disks

Use the following procedure to add disks to the coordinator disk group, or to remove disks from it. Note the following about the procedure:

- You must have an odd number (three minimum) of disks/LUNs in the coordinator disk group.

- The disks you add must support SCSI-3 persistent reservations; see "Configuring CVM and CFS Manually (Optional)" on page 62.

- You must reboot each system in the cluster before the changes made to the coordinator disk group take effect.

To remove and replace a disk in the coordinator disk group

1. Log in as the root user on one of the cluster systems.

2. Import the coordinator disk group. The file /etc/vxfendg includes the name of the disk group (typically, vxfencoorddg) that contains the coordinator disks, so use the command:

   # vxdg -tfC import `cat /etc/vxfendg`

   where:

   -t specifies that the disk group is imported only until the system restarts.

   -f specifies that the import is to be done forcibly, which is necessary if one or more disks is not accessible.

   -C specifies that any import blocks are removed.

3. To add disks to the disk group, or to remove disks from the disk group, use the VxVM disk administrator utility, vxdiskadm.

4. After disks are added or removed, deport the disk group:

   # vxdg deport `cat /etc/vxfendg`

5. Start the I/O fencing driver on all nodes in the cluster:

   # /etc/init.d/vxfen start
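After running the vxfen init script on all nodes, fencing membership can be confirmed with gabconfig. The generation numbers below are illustrative; the presence of a Port b line indicates that the fencing driver is configured:

   # /sbin/gabconfig -a
   GAB Port Memberships
   ===============================================================
   Port a gen 4a1c0001 membership 01
   Port b gen 4a1c0002 membership 01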

Troubleshooting ODM

Headings indicate likely symptoms or procedures required for a solution.

File System Configured Incorrectly for ODM Shuts Down Oracle

Linking Oracle9i with the VERITAS ODM libraries provides the best file system performance. See "To relink VERITAS libraries" on page 199 for instructions on creating the link. Shared file systems in RAC clusters without ODM libraries linked to Oracle9i may exhibit slow performance and are not supported.

If ODM cannot find the resources it needs to provide support for cluster file systems, it does not allow Oracle to identify cluster files, which causes Oracle to fail at startup.

To verify cluster status

Run the command:

   # cat /dev/odm/cluster
   cluster status: enabled

If the status is enabled, ODM is supporting cluster files. Any other cluster status indicates that ODM is not supporting cluster files. Other possible values include:

pending    ODM cannot yet communicate with its peers, but anticipates being able to eventually.

failed     ODM cluster support has failed to initialize properly. Check console logs.

disabled   ODM is not supporting cluster files. If you think it should, check:

           - The /dev/odm mount options in /etc/fstab. If the nocluster option is being used, it can force the disabled cluster support state.

           - That the VRTSgms (group messaging service) package is installed.

If /dev/odm is not mounted, no status can be reported.
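Two quick checks for the conditions above, shown as a sketch: confirm that /dev/odm is mounted and that the VRTSgms package is present:

   # mount | grep /dev/odm
   # rpm -q VRTSgms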

To start ODM

1. Execute:

   # /sbin/service vxgms start

2. Execute:

   # /sbin/service vxodm start


Troubleshooting VCSIPC

Headings indicate likely symptoms or procedures required for a solution.

VCSIPC Errors in Oracle Trace/Log Files

If you see any VCSIPC errors in the Oracle trace/log files, check /var/log/messages for any LMX error messages. If you see messages that contain any of the following:

   ... out of buffers
   ... out of ports
   ... no minors available

refer to "Tunable Kernel Driver Parameters" on page 375.

If you see any VCSIPC warning messages in the Oracle trace/log files that resemble:

   connection invalid

or

   Reporting communication error with node

check whether the Oracle Real Application Clusters instance on the other system is still running or has been restarted. The warning message indicates that the VCSIPC/LMX connection is no longer valid.

Troubleshooting CVM

Headings indicate likely symptoms or procedures required for a solution.

Shared Disk Group Cannot Be Imported

If you see a message resembling:

   vxvm:vxconfigd:ERROR:vold_pgr_register(/dev/vx/rdmp/disk_name):
   local_node_id<0
   Please make sure that CVM and vxfen are configured and operating
   correctly

CVM cannot retrieve the node ID of the local system from the vxfen driver. This usually happens when port b is not configured.

To verify that the vxfen driver is configured

Check the GAB ports with the command:

   # /sbin/gabconfig -a

Port b must exist on the local system.

Importing Shared Disk Groups

The following message may appear when importing a shared disk group:

   VxVM vxdg ERROR V Disk group disk_group_name: import failed: No
   valid disk found containing disk group

You may need to remove keys written to the disk. Refer to "Removing Existing Keys from Disks" on page 83.

Starting CVM

If you cannot start CVM, check the consistency between the /etc/llthosts and main.cf files for node IDs. You may need to remove keys written to the disk. Refer to "Removing Existing Keys from Disks" on page 83.

CVMVolDg Not Online Even Though CVMCluster Is Online

When the CVMCluster resource goes online, the shared disk groups are automatically imported. If the disk group import fails for some reason, the CVMVolDg resources fault. Clearing and offlining the CVMVolDg type resources does not fix the problem.

To resolve the resource issue

1. Fix the problem causing the import of the shared disk group to fail.

2. Offline the service group containing the resource of type CVMVolDg as well as the service group containing the CVMCluster resource type.

3. Bring the service group containing the CVMCluster resource online.

4. Bring the service group containing the CVMVolDg resource online.

Shared Disks Not Visible

If the shared disks in /proc/scsi/scsi are not visible, make sure that all shared LUNs are discovered by the HBA and the SCSI layer. This can be verified by looking at the /proc/scsi/<fibre_channel_driver>/* files. For example, /proc/scsi/qla2xxx/2 contains SCSI LUN information:

   (Id:Lun) * - indicates lun is not registered with the OS.
   ( 0: 0): Total reqs 74, Pending reqs 0, flags 0x0, 0:0:84 00
   ( 0: 1): Total reqs 0, Pending reqs 0, flags 0x0*, 0:0:84 00
   ( 0: 2): Total reqs 0, Pending reqs 0, flags 0x0*, 0:0:84 00
   ( 0: 3): Total reqs 0, Pending reqs 0, flags 0x0*, 0:0:84 00
   ( 0: 4): Total reqs 0, Pending reqs 0, flags 0x0*, 0:0:84 00

The example indicates that not all LUNs are discovered by SCSI. The problem might be fixed by specifying the dev_flags or default_dev_flags and max_luns parameters for the SCSI driver.

RHEL 4.0 example: /etc/modprobe.conf includes:

   options scsi_mod dev_flags="HITACHI:OPEN-3:0x240"
   options scsi_mod max_luns=512

SLES9 example: /boot/efi/efi/SuSE/elilo.conf includes:

   append = "selinux=0 splash=silent elevator=cfq scsi_mod.default_dev_flags=0x240"

If the LUNs are not visible in the /proc/scsi/<fibre_channel_driver>/* files, it may indicate a problem with the SAN configuration or zoning.


Troubleshooting Interconnects

Headings indicate likely symptoms or procedures required for a solution.

Restoring Communication Between Host and Disks After Cable Disconnection

If a fibre cable is inadvertently disconnected between the host and a disk, you can restore communication between the host and the disk without restarting.

To restore lost cable communication between host and disk

1. Reconnect the cable.

2. Use the fdisk -l command to verify that the host sees the disks. It may take a few minutes before the host is capable of seeing the disk.

3. Issue the following vxdctl command to force the VxVM configuration daemon, vxconfigd, to rescan the disks:

   # vxdctl enable

Network Interfaces Change Their Names After Reboot

On SUSE systems, network interfaces change their names after reboot even with HOTPLUG_PCI_QUEUE_NIC_EVENTS=yes and MANDATORY_DEVICES="..." set.

Workaround: Use PERSISTENT_NAME=ethX, where X is the interface number, for all interfaces.

Section II. Using Oracle 10g on Red Hat

The chapters in this section assume you have used the overview, planning, and installing chapters to install and configure Storage Foundation for Oracle RAC. See "Using Storage Foundation for Oracle RAC" on page 1.

Procedures for using Oracle 10g with Storage Foundation for Oracle RAC on Red Hat Linux are provided in this section.

- Chapter 6. Installing Oracle 10g Software on Red Hat, on page 95
- Chapter 7. Upgrading and Migrating Oracle Software on Red Hat, on page 119
- Chapter 8. Configuring VCS Service Groups for Oracle 10g on Red Hat, on page 137
- Chapter 9. Adding and Removing Cluster Nodes for Oracle 10g on Red Hat Linux, on page 149
- Chapter 10. Uninstalling Storage Foundation for Oracle RAC on Oracle 10g Systems, on page 173


Installing Oracle 10g Software on Red Hat

Use this chapter to install Oracle 10g software on clean systems. You can install the Oracle 10g binaries on shared storage or locally on each node. Storage Foundation for Oracle RAC supports Oracle 10g Release 1.

- For upgrading to Oracle 10g from Oracle9i, see "Migrating from Oracle9i to Oracle 10g" on page 119.

- For applying patchsets, see "Applying Oracle 10g Patchsets on Red Hat" on page 132.

- For Oracle information, access the Oracle Technology Network. See B , Oracle Real Application Clusters Installation and Configuration Guide.

- For installing Oracle9i on Red Hat Linux, see "Installing Oracle9i Software on Red Hat" on page 181.

- For installing Oracle 10g on SUSE Linux, see "Installing Oracle 10g Software on SLES9" on page 265.

The examples in this guide primarily show the shared storage installation. The configuration of VCS service groups described in "Configuring VCS Service Groups for Oracle 10g on Red Hat" on page 137 and the example main.cf file shown in "Sample Configuration Files" on page 343 are based on using shared storage for the Oracle 10g binaries.


Configuring Oracle 10g Release 1 Prerequisites

After installing Storage Foundation for Oracle RAC, configure the Oracle 10g prerequisites:

- "Creating OS Oracle User and Group" on page 96
- "Creating CRS_HOME" on page 96
- "Creating Volumes or Directories for OCR and Vote Disk" on page 98
- "Configuring Private IP Addresses on All Cluster Nodes" on page 100
- "Obtaining Public Virtual IP Addresses for Use by Oracle" on page 100

Creating OS Oracle User and Group

On each system, create a local group and a local user for Oracle. For example, create the group oinstall and the user oracle. Be sure to assign the same group ID, user ID, and home directory for the user on each system.

To create the OS Oracle user and group on each system

1. Create the oinstall and dba groups on each system:

   # groupadd -g 1000 oinstall
   # groupadd -g 1001 dba

2. Create the oracle user on each system; the command should resemble:

   # useradd -g oinstall -u <user_id> -G dba -d /oracle oracle

3. Enable rsh and key-based ssh authentication for the oracle user on all nodes.


Creating CRS_HOME

On each system in the SFRAC cluster, create a directory for CRS_HOME. The disk space required is 0.5 GB minimum.

To create CRS_HOME on each system

1. Log in as the root user on one system:

   # su - root

2. Create groups and users.

   a. Referring to the Oracle Real Application Clusters Installation and Configuration Guide, create the groups oinstall (the Oracle Inventory group) and dba, and the user oracle, assigning the primary group for oracle to be oinstall and the secondary group for oracle to be dba. Assign a password for the oracle user.

   b. On the original node, determine the user and group IDs, and use the identical IDs on each of the other nodes. Assign identical passwords for the user oracle.

3. On one node, create a disk group. For example:

   # vxdg init crsdg sdc

   For a shared CRS_HOME, on the CVM master:

   # vxdg -s init crsdg sdc

4. Create the volume in the group for CRS_HOME. The volume should be a minimum of 0.5 GB:

   # vxassist -g crsdg make crsvol 500M

5. Start the volume:

   # vxvol -g crsdg startall

6. Create a VxFS file system on which to install CRS. For example:

   # mkfs -t vxfs /dev/vx/rdsk/crsdg/crsvol

7. Create the mount point for CRS_HOME:

   # mkdir /oracle/crs

   Note: Make sure that CRS_HOME is a subdirectory of ORACLE_BASE.

8. Mount the file system, using the device file for the block device:

   # mount -t vxfs /dev/vx/dsk/crsdg/crsvol /oracle/crs

   For a shared CRS_HOME, on the CVM master:

   # mount -t vxfs -o cluster /dev/vx/dsk/crsdg/crsvol /oracle/crs

9. For a local mount only, edit the /etc/fstab file and list the new file system. For example:

   /dev/vx/dsk/crsdg/crsvol /oracle/crs vxfs defaults

10. Set the CRS_HOME directory for the oracle user as /oracle/crs.

11. Assign ownership of the directory to oracle and the group oinstall:

   # chown -R oracle:oinstall /oracle/crs

12. On each cluster system, repeat step 1 through step 11. For a shared CRS_HOME, repeat only step 7 and step 8.
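Before repeating the procedure on the next system, a quick way to confirm that the new file system is mounted with the expected options (a sketch; output omitted):

   # mount | grep /oracle/crs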

Creating Volumes or Directories for OCR and Vote Disk

The OCR and Vote disk must be shared among all nodes in a cluster. Whether you create shared volumes or shared file system directories, you can add them to the VCS configuration to make them highly available. The ORACLE_BASE directory contains CRS_HOME and ORACLE_HOME.

Create the OCR and Vote disk directories in a cluster file system, or create shared raw volumes for the OCR and Vote disk. VERITAS recommends creating directories in a file system for the OCR and Vote disk instead of creating separate OCR and Vote disk volumes.

To create OCR and Vote disk directories on CFS

1. Log in as the root user:

   # su - root

2. On the master node, create a shared disk group:

   # vxdg -s init ocrdg sdz

3. Create a volume in the shared group for the OCR and Vote disk:

   # vxassist -g ocrdg make ocrvol 200M

4. Assign ownership of the volume using the vxedit command:

   # vxedit -g disk_group set group=group user=user mode=660 ocrvol

5. Start the volume:

   # vxvol -g ocrdg startall

6. Create the file system:

   # mkfs -t vxfs /dev/vx/rdsk/ocrdg/ocrvol

7. Create a mount point for the Vote disk and OCR files on all the nodes:

   # mkdir /ora_crs

8. On all nodes, mount the file system:

   # mount -t vxfs -o cluster /dev/vx/dsk/ocrdg/ocrvol /ora_crs

9. Create directories for the Vote disk and OCR files:

   # cd /ora_crs
   # mkdir ocr vote

   When installing CRS, specify the following for the OCR and Vote disk respectively:

   /ora_crs/ocr
   /ora_crs/vote

10. Set oracle to be the owner of the file system, and set the permissions:

   # chown -R oracle:oinstall /ora_crs
   # chmod 755 /ora_crs

To create OCR and Vote disk on raw volumes

1. Log in as the root user.

2. On the CVM master node, create a shared disk group:

   # vxdg -s init ocrdg sdz

3. Create volumes in the shared group for the OCR and Vote disk:

   # vxassist -g ocrdg make ocrvol 100M
   # vxassist -g ocrdg make vdvol 100M

4. Assign ownership of the volumes using the vxedit command:

   # vxedit -g ocrdg set user=oracle group=oinstall mode=660 ocrvol
   # vxedit -g ocrdg set user=oracle group=oinstall mode=660 vdvol

5. Start the volumes:

   # vxvol -g ocrdg startall

6. When installing CRS, specify the following for the OCR and Vote disk:

   OCR:       /dev/vx/rdsk/ocrdg/ocrvol
   Vote disk: /dev/vx/rdsk/ocrdg/vdvol
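Before starting the CRS installation, it is worth confirming that both volumes are enabled and carry the ownership and mode set above; a sketch:

   # vxprint -g ocrdg -v
   # ls -lL /dev/vx/rdsk/ocrdg/ocrvol /dev/vx/rdsk/ocrdg/vdvol

The ls -lL output should show oracle:oinstall as the owner and group with mode 660 on both device nodes.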

Configuring Private IP Addresses on All Cluster Nodes

The CRS daemon requires a private IP address on each system to enable communications and heartbeating. Do the following to set up the private IP addresses.

To configure private IP addresses on all cluster nodes

1. On each cluster system, determine a private NIC device for which LLT is configured; look at the file /etc/llttab. For example, if eth0 is used as an LLT interconnect on one system, you can configure an available IP address for it. Example commands:

   # ifconfig eth0 down
   # ifconfig eth0 inet <private_IP> netmask <netmask>
   # ifconfig eth0 up

   Configure one private NIC on each node.

   Note: The private IP addresses of all nodes should be on the same physical network in the same IP subnet.

2. On each system, add the configured private IP addresses of all nodes to the /etc/hosts file, mapping them to symbolic names. Example:

   <private_IP_of_galaxy>  galaxy_priv
   <private_IP_of_nebula>  nebula_priv

3. From each system, try pinging each of the other nodes, using the symbolic system name associated with the private NIC IP address.

After configuring the IP addresses, you can edit the CVM service group and add the PrivNIC resource to make the IP addresses highly available. See "Configuring CVM Service Group for Oracle 10g Manually" on page 143, and "Configuring the PrivNIC Agent" on page 369.


Obtaining Public Virtual IP Addresses for Use by Oracle

Before starting the Oracle installation, you must create virtual IP addresses for each node. An IP address and an associated host name should be registered in the domain name service (DNS) for each public network interface.

To obtain public virtual IP addresses for use by Oracle

1. Obtain one virtual IP address per node.

2. Add entries for the virtual IP address and virtual public name in the /etc/hosts file, for all nodes.

3. Register the virtual IP addresses with DNS. Example /etc/hosts entries:

   <virtual_IP_of_galaxy>  galaxy_pub
   <virtual_IP_of_nebula>  nebula_pub


Installing Oracle 10g CRS and Database

Installing the Oracle 10g software on shared storage in a RAC environment requires you to create volumes and directories on CFS. Three options for installing are supported:

- "Installing Oracle 10g Release 1 on a Shared Disk" on page 101
- "Installing Oracle 10g Release 1 Locally" on page 107
- "Installing Oracle 10g Release 1 Manually" on page 113

Installing Oracle 10g Release 1 on a Shared Disk

To prepare to install Oracle 10g Release 1 on a shared disk

1. Log in to any system of the cluster as the root user:

   # su - root

2. On the master node, create a shared disk group.

   a. Enter:

      # vxdg -s init orabinvol_dg sdd

   b. Create the volume in the shared group:

      # vxassist -g orabinvol_dg make orabinvol 3000M

      For the Oracle 10g binaries, make the volume 3 GB.

   c. Start the volume:

      # vxvol -g orabinvol_dg startall

   d. On the master node, create a VxFS file system on the shared volume on which to install the Oracle 10g binaries. For example, create the file system on orabinvol:

      # mkfs -t vxfs /dev/vx/dsk/orabinvol_dg/orabinvol

3. On each system, create the mount point for the Oracle binaries and mount the file system.

   a. Create the mount point for the Oracle binaries if it does not already exist:

      # mkdir /oracle/10g

   b. Mount the file system, using the device file for the block device:

      # mount -t vxfs -o cluster /dev/vx/dsk/orabinvol_dg/orabinvol /oracle/10g

4. From the CVM master, execute:

   # vxedit -g orabinvol_dg set user=oracle group=oinstall mode=660 orabinvol

5. Set oracle to be the owner of the file system:

   # chown oracle:oinstall /oracle

To install Oracle 10g Release 1 on a shared disk

1. Make sure that the Oracle installer is in a directory that is writable. If you are using the CD-ROM, make sure that the Oracle installation files are copied locally.

2. On the first system, set the following variables in root's environment on the node from which installsfrac -configure will be executed.

   a. For the Bourne shell (bash, sh, or ksh), enter:

      # export ORACLE_BASE=/oracle
      # export DISPLAY=host:0.0

   b. For the C shell (csh or tcsh):

      # setenv ORACLE_BASE /oracle
      # setenv DISPLAY host:0.0

3. If installing Oracle or lower, set the environment variable OUI_ARGS to -ignoreSysPrereqs before launching the installsfrac -configure script.

   Caution: The Oracle or earlier OUI does not understand RHEL4 as a supported OS and will exit with an error if the environment variable OUI_ARGS is not set to -ignoreSysPrereqs before running the installsfrac -configure script.

   Example for bash:

      # export OUI_ARGS="-ignoreSysPrereqs"

   Example for the C shell:

      # setenv OUI_ARGS "-ignoreSysPrereqs"

4. Set the X server access control:

   # xhost + <hostname or ip address>

   where <hostname or ip address> is the hostname or IP address of the server on which you are displaying. Example:

      # xhost + <ip_address>
      <ip_address> being added to access control list

   Note: By default, the installsfrac utility uses ssh for remote communication. However, rsh can be used in place of ssh by using the -usersh option with the installsfrac utility. The installation of Oracle 10g requires that rsh be configured on all nodes. Refer to the Oracle 10g documentation for details on configuring rsh.

5. On the same node where you have set the environment variables, execute the following command as root:

   # cd /opt/VRTS/install
   # ./installsfrac -configure

   The installer displays the copyright message.

6. When the installer prompts, enter the system names, separated by spaces, on which to configure Storage Foundation for Oracle RAC. For the installation example used in this procedure:

   galaxy nebula

   The installer checks both systems for communication and creates a log directory on the second system in /var/tmp/installsfracxxxxxxxxxx, where xxxxxxxxxx is the timestamp.

7. When the initial system check is successfully completed, press Enter to continue.

8. The installer proceeds to verify the license keys.

   a. Enter additional licenses at this time if any are needed.

   b. When the licenses are successfully verified, press Enter to continue.

9. The installer presents task choices for installing and configuring, depending on the operating system you are running. Example:

      This program enables you to perform one of the following tasks:
      1) Install Oracle 9i.
      2) Install Oracle 10g.
      3) Relink SFRAC for Oracle 9i.
      4) Relink SFRAC for Oracle 10g.
      5) Configure different components of SFRAC
      Enter your choice [1-5]: [?]

   Select Install Oracle 10g. The installer proceeds to check environment settings.

10. Set the Oracle directories.

   a. When prompted, enter the directory name for CRS_HOME relative to the ORACLE_BASE directory.

   b. When prompted, enter the directory name for ORACLE_HOME relative to the ORACLE_BASE directory. The installer proceeds to validate ORACLE_HOME and check the node list.

   c. Press Enter to continue.

11. Configure user accounts.

   a. Enter the Oracle Unix user account when prompted. The installer checks for the user on all systems.

   b. Enter the Oracle Inventory group when prompted. The installer checks for the group's existence on all systems.

   c. Press Enter to continue.

12. Enter the Oracle installer path for CRS when prompted. Specify the disk in your Oracle media kit where the CRS binaries reside. The installer validates the CRS installer. Example:

      /<CRS_Disk>/Disk1

   In the example, <CRS_Disk> is the disk where the CRS binaries reside.

13. Enter the Oracle installer path when prompted. The installer validates the Oracle installer, copies files, creates directories based on the information provided, and invokes the Oracle CRS installer. Example:

      /<DB_Disk>/Disk1/

   In the example, <DB_Disk> is the disk where the Oracle binaries reside.

   When the Oracle CRS installer appears, it prompts for the following:

   a. Specify the file source and destination and click Next.

   b. Specify the name for the install and CRS_HOME and click Next.

   c. Select the language and click Next.

   d. The host names of the clustered machines display. Specify private names for the nodes in the cluster and click Next.

   e. Specify the private interconnect and click Next.

   f. Specify the OCR file name with an absolute path, for example the /ora_crs/ocr/ocr file, and click Next.

   g. Specify the CSS (Vote disk) file name with an absolute path, for example the /ora_crs/css/css file, and click Next. The installer proceeds with the CRS installation and sets the CRS parameters.

   h. When prompted at the end of the CRS installation, run the $CRS_HOME/root.sh file on each cluster node.

   i. Exit the CRS installer after running root.sh and continue with installsfrac -configure for the Oracle 10g binaries installation.

14. Press Enter to continue. The Oracle 10g database installer window displays.

   a. Specify the file locations and click Next.

   b. Select all nodes in the cluster and click Next.

   c. Choose the installation type. The installer verifies that the requirements are all met.

   d. Provide the OSDBA and OSOPER groups for UNIX Oracle users when prompted.

   e. When prompted by the Create Database option, select No.

   f. Install the binaries now.

   g. The installer prompts you to run $ORACLE_HOME/root.sh on each node. The VIPCA utility displays on the first node only:

      - Export LD_ASSUME_KERNEL= (see Oracle Metalink Doc id ).
      - Export DISPLAY=<ip>:0.
      - Execute $ORACLE_HOME/root.sh.
      - Select the public NIC for all the systems.
      - Enter the virtual IP addresses for the public NIC.

   h. The installer confirms when the installation is successful. Exit the Oracle 10g installer and return to installsfrac -configure.

15. In installsfrac -configure, press Enter to continue. The success of the configuration is reported. The configuration summary is saved at:

      /opt/VRTS/install/logs/installsfracxxxxxxxxxx.summary

   The installsfrac log is saved at:

      /opt/VRTS/install/logs/installsfracxxxxxxxxxx.log

16. After successful installation of CRS and Oracle 10g, a RAC database can be configured. Use your own tools or see "Creating a Starter Database" on page 351.

Bring CVM and the private NIC under VCS control. This can be achieved by:

- Using the VCS Oracle RAC Configuration wizard. See "Creating an Oracle Service Group" on page 137.

- Manually editing the VCS configuration file. See "Configuring CVM Service Group for Oracle 10g Manually" on page 143, and "Configuring the PrivNIC Agent" on page 369.

If the nodes are rebooted before the CVM service group is configured, the services will not start on their own.


Installing Oracle 10g Release 1 Locally

Use this procedure to install Oracle 10g locally on each system in a VERITAS Storage Foundation 4.1 for Oracle RAC environment.

To prepare to install Oracle 10g Release 1 locally

1. Log in as the root user on one system.

2. On one node, create a disk group.

   a. Enter:

      # vxdg init or_dg sdz

   b. Create the volume in the group:

      # vxassist -g or_dg make or_vol 5000M

      For the Oracle 10g binaries, make the volume 5,000 MB.

   c. Start the volume:

      # vxvol -g or_dg startall

   d. Create a VxFS file system on or_vol on which to install the Oracle 10g binaries. For example:

      # mkfs -t vxfs /dev/vx/dsk/or_dg/or_vol

3. Create the mount point for the file system.

   a. Enter:

      # mkdir /oracle

   b. Mount the file system, using the device file for the block device:

      # mount -t vxfs /dev/vx/dsk/or_dg/or_vol /oracle

   c. To mount the file system automatically across reboots, edit the /etc/fstab file and add the new file system. For example:

      /dev/vx/dsk/or_dg/or_vol /oracle vxfs defaults

4. Create a local group and a local user for Oracle; a hedged example follows step 8 below. For example, create the group oinstall and the user oracle. Be sure to assign the same user ID and group ID for the user on each system.

5. Set the home directory for the oracle user as /oracle.

6. Set the ownership and permissions on the volume:

   # vxedit -g or_dg set user=oracle group=oinstall mode=660 or_vol

7. Repeat step 1 through step 6 on the other systems.

8. Set the X server access control:

   # xhost + <hostname or ip address>

   Where <hostname or ip address> is the hostname or IP address of the server on which you are displaying. Example output:

   <hostname or ip address> being added to access control list
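A minimal sketch for step 4, run as root on each system; the group name oinstall comes from the procedure above, while the numeric IDs are illustrative only and must simply match across all systems:

   groupadd -g 1000 oinstall            # same GID on every system
   useradd -u 1001 -g oinstall -d /oracle oracle   # same UID and home on every system
   passwd oracle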

9. On the first system, set the following variables in root's environment on the node from which installsfrac -configure will be executed.

   a. For the Bourne shell (bash, sh, or ksh), enter:

      # export ORACLE_BASE=/oracle
      # export DISPLAY=host:0.0

   b. For the C shell (csh or tcsh):

      # setenv ORACLE_BASE /oracle
      # setenv DISPLAY host:0.0

10. If you are installing an Oracle 10g release whose OUI does not recognize RHEL4, set the environment variable OUI_ARGS to -ignoresysprereqs before launching the installsfrac -configure script.

    Caution: An OUI from an early Oracle 10g release does not recognize RHEL4 as a supported operating system and will exit with an error if the environment variable OUI_ARGS is not set to -ignoresysprereqs before running the installsfrac -configure script.

    Example for bash:

    # export OUI_ARGS="-ignoresysprereqs"

    Example for the C shell:

    # setenv OUI_ARGS "-ignoresysprereqs"

To install Oracle 10g Release 1 locally

1. Ensure that the Oracle installer is in a directory that is writable. If you are using the CD-ROM, ensure that the Oracle installation files are copied locally.

   Note: By default, the installsfrac utility uses ssh for remote communication. However, rsh can be used in place of ssh by using the -usersh option with the installsfrac utility.

2. On the same node where you have set the environment variables, execute the following command as root:

   # cd /opt/VRTS/install
   # ./installsfrac -configure

   The installer displays the copyright message.

3. When the installer prompts, enter the system names, separated by spaces, on which to configure Storage Foundation for Oracle RAC. For the installation example used in this procedure:

   galaxy nebula

   The installer checks both systems for communication and creates a log directory on the second system in /var/tmp/installsfracxxxxxxxxxx, where xxxxxxxxxx represents the timestamp.

4. When the initial system check is successfully completed, press Enter to continue.

5. The installer proceeds to verify the license keys.

   a. Enter additional licenses at this time if any are needed.

   b. When the licenses are successfully verified, press Enter to continue.

6. The installer presents task choices for installing and configuring, depending on the operating system you are running. Example:

   This program enables you to perform one of the following tasks:
   1) Install Oracle 9i.
   2) Install Oracle 10g.
   3) Relink SFRAC for Oracle 9i.
   4) Relink SFRAC for Oracle 10g.
   5) Configure different components of SFRAC
   Enter your choice [1-5]: [?]

   Select Install Oracle 10g. The installer proceeds to check environment settings.

7. Configure the Oracle directories:

   a. When prompted, enter the directory name for CRS_HOME relative to the ORACLE_BASE directory.

   b. When prompted, enter the directory name for ORACLE_HOME relative to the ORACLE_BASE directory. The installer proceeds to validate ORACLE_HOME and check the node list.

   c. Press Enter to continue.

8. Set user accounts.

   a. Enter the Oracle Unix user account when prompted. The installer checks for the user on all systems.

   b. Enter the Oracle inventory group when prompted. The installer checks for group existence on all systems.

   c. Press Enter to continue.

9. Enter the Oracle installer path for CRS when prompted. The installer validates the CRS installer.

10. Enter the Oracle installer path when prompted. The installer validates the Oracle installer, copies files, creates directories based on the information provided, and invokes the Oracle CRS installer.

    When the Oracle CRS Installer appears, it prompts for the following:

    a. Specify the file source and destination and click Next.

    b. Run orainstroot.sh on all nodes. If it does not exist on a node, copy it to all nodes.

    c. Specify the name for the install and CRS_HOME and click Next.

    d. Select the language and click Next.

    e. Specify the host names and private names for the nodes in the cluster and click Next.

    f. Specify the private interconnect and click Next.

    g. Specify the OCR file name with an absolute path, for example /ora_crs/ocr_css/ocr, and click Next.

    h. Specify the CSS (Vote disk) file name with an absolute path, for example /ora_crs/ocr_css/css, and click Next. The installer proceeds with the CRS installation and sets the CRS parameters.

    i. When prompted at the end of the CRS installation, run the $CRS_HOME/root.sh file on each cluster node; see the sketch below.

    j. Exit the CRS Installer after running root.sh and continue with installsfrac -configure for the Oracle 10g binaries installation.
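One way to carry out step i, sketched under the assumptions that passwordless root ssh works between the nodes, that galaxy and nebula are the cluster nodes as in the examples, and that /oracle/crs is a hypothetical CRS_HOME; CRS expects root.sh to finish on one node before it starts on the next, which the sequential loop preserves:

   # Run root.sh on each cluster node in turn, as root.
   for node in galaxy nebula; do
       ssh root@$node /oracle/crs/root.sh
   done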

11. Press Enter to continue. The Oracle 10g database installer window displays.

    a. Specify the file locations and click Next.

    b. Select all nodes in the cluster and click Next.

    c. Choose the installation type. The installer verifies that the requirements are all met.

    d. Provide the OSDBA and OSOPER groups for the UNIX Oracle user when prompted.

    e. When prompted to create the database, select the No option.

    f. Install the binaries now.

    g. The installer prompts you to run $ORACLE_HOME/root.sh on each node. The VIPCA utility displays on the first node only.

       - Export LD_ASSUME_KERNEL; see the Oracle Metalink document for the required value.
       - Export DISPLAY=<ip>:0.
       - Execute $ORACLE_HOME/root.sh.
       - Select the public NIC for all the systems.
       - Enter the virtual IP addresses for the public NIC.

    h. The installer confirms when installation is successful. Exit the Oracle 10g installer and return to installsfrac -configure.

12. In installsfrac -configure, press Enter to continue. The success of the configuration is reported.

    The configuration summary is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.summary

    The installsfrac log is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.log

13. After successful installation of CRS and Oracle 10g, a RAC database can be configured. Use your own tools or see Creating a Starter Database on page 351.

14. Bring CVM and the private NIC under VCS control. This can be achieved by:

    - Using the VCS Oracle RAC configuration wizard. See Creating an Oracle Service Group on page 137.
    - Manually editing the VCS configuration file. See Configuring CVM Service Group for Oracle 10g Manually on page 143.

    If the nodes are rebooted before configuring the CVM service group, the services will not start on their own.

Installing Oracle 10g Release 1 Manually

Typically, VERITAS recommends using installsfrac -configure to install the Oracle 10g RAC binaries. However, some situations may require manual installation of the Oracle 10g RAC binaries. The following steps are required to install Oracle 10g manually:

- Patching the CRS OUI
- Pre-installation tasks
- OUI-based installation for CRS
- OUI-based installation for the database
- Post-installation relinking
- Post-installation configuration

To patch the CRS OUI

1. Log in as oracle:

   # su - oracle

   The Oracle CRS installer must be patched so that it detects the presence of VCS and uses the correct cluster membership. If the OUI has been patched previously using this procedure, proceed to the next section.

2. Search for the clusterqueries.jar file inside the CRS OUI directory structure and copy it to a temporary location such as /tmp/jar. With the current release of the OUI, it may be at a location of the following form:

   # cp <installer_directory>/Disk1/stage/Queries/clusterqueries/<version>/1/clusterqueries.jar /tmp/jar

3. Unzip this file in the temporary location /tmp/jar; see the sketch below. After unzipping, the directory should contain coe.tar under /tmp/jar/bin/linux/. Copy coe.tar to another temporary location such as /tmp/tar:

   # cp /tmp/jar/bin/linux/coe.tar /tmp/tar
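The guide does not show the unpack command itself. A minimal sketch for step 3, assuming the jar was copied to /tmp/jar as in step 2 (jar archives use the zip format, so unzip can unpack them):

   # Unpack the jar in place; bin/linux/coe.tar should appear afterward.
   cd /tmp/jar
   unzip clusterqueries.jar
   ls -l bin/linux/coe.tar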

4. Extract the tar file at the temporary location /tmp/tar:

   # tar -xvf /tmp/tar/coe.tar

5. Back up the original lsnodes.sh script:

   # cp lsnodes.sh lsnodes.sh.orig

6. Patch the lsnodes.sh file as follows, which creates a new file, lsnodes_new.sh:

   # cat lsnodes.sh | sed -e '/CM / i\
   #Patch to check if something is present in central location \
   if [ -d /opt/ORCLcluster/lib ]; then \
     CL="/opt/ORCLcluster/lib" \
     export LD_LIBRARY_PATH=\$CL:\$LD_LIBRARY_PATH \
     cd $base \
     ret=`./lsnodes` \
     if [ $? = 0 ]; then \
       echo "CL"; \
       exit; \
     fi \
   fi \
   ' > lsnodes_new.sh

   The lsnodes.sh file determines whether to use the existing cluster membership or to ask the user to input a new set of nodes that will form the cluster. The script acknowledges the existence of a cluster only if Oracle's Cluster Manager (oracm) is running. The patch changes this behavior so that lsnodes.sh acknowledges the presence of a cluster if the /opt/ORCLcluster/lib directory is present and lsnodes executes correctly.

7. Overwrite the existing lsnodes.sh with the new lsnodes_new.sh file:

   # cp lsnodes_new.sh lsnodes.sh

8. Ensure that the lsnodes.sh file has permissions set to 755:

   # chmod 755 lsnodes.sh

9. Back up the lsnodes_get.sh file:

   # cp lsnodes_get.sh lsnodes_get.sh.orig

10. Patch the lsnodes_get.sh file as follows, which creates a new file, lsnodes_get_new.sh:

    # cat lsnodes_get.sh | sed -e '
    s_CL="/etc/ORCLcluster/oracm/lib"_\
    if [ -d /opt/ORCLcluster/lib ]; then \
      CL="/opt/ORCLcluster/lib" \
    else \
      CL="/etc/ORCLcluster/oracm/lib" \
    fi _
    ' > lsnodes_get_new.sh

    The lsnodes_get.sh script is the one that actually queries the lsnodes command for the cluster members. This patch sets the CL variable to /opt/ORCLcluster/lib, which is exported as the LD_LIBRARY_PATH value. This allows the VERITAS vcsmm library to be used while determining the cluster membership.

11. Overwrite the existing lsnodes_get.sh with the new lsnodes_get_new.sh file:

    # cp lsnodes_get_new.sh lsnodes_get.sh

12. Ensure that the lsnodes_get.sh file has permissions set to 755:

    # chmod 755 lsnodes_get.sh

13. Delete the lsnodes_new.sh and lsnodes_get_new.sh files:

    # rm -f lsnodes_new.sh lsnodes_get_new.sh

14. Re-create coe.tar from the /tmp/tar location as follows:

    # tar cvf /tmp/coe.tar -C /tmp/tar .

15. Overwrite the old coe.tar present under the /tmp/jar/bin/linux/ directory.

16. Create the jar file as follows:

    # jar -cMf /tmp/clusterqueries.jar -C /tmp/jar .

    Make sure that the -M option is passed while creating the jar; the jar file does not require a manifest file.

17. Copy the patched jar file back into the OUI location.
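Before copying the jar back, it is worth sanity-checking the patched scripts and the rebuilt archive. A minimal sketch, under the assumption that the files are still in the /tmp locations used above:

   # Syntax-check the patched shell scripts without executing them.
   bash -n /tmp/tar/lsnodes.sh && echo "lsnodes.sh OK"
   bash -n /tmp/tar/lsnodes_get.sh && echo "lsnodes_get.sh OK"

   # Confirm the rebuilt jar contains the updated coe.tar.
   jar -tf /tmp/clusterqueries.jar | grep coe.tar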

To complete the pre-installation tasks

Execute this procedure as the oracle user.

1. Use the steps in Configuring Oracle 10g Release 1 Prerequisites to set up the required mounts (Oracle home, CRS home, and the OCR and Vote disk) and the other prerequisites for Oracle 10g:

   a. Creating OS Oracle User and Group on page 96
   b. Creating CRS_HOME on page 96
   c. Creating Volumes or Directories for OCR and Vote Disk on page 98
   d. Configuring Private IP Addresses on All Cluster Nodes on page 100
   e. Obtaining Public Virtual IP Addresses for Use by Oracle

2. Create the /opt/ORCLcluster/lib directory on all cluster nodes:

   # mkdir -p /opt/ORCLcluster/lib

3. Copy /usr/lib/libvcsmm.so to the /opt/ORCLcluster/lib directory as libskgxn2.so on all cluster nodes (see the sketch after this procedure):

   # cp /usr/lib/libvcsmm.so /opt/ORCLcluster/lib/libskgxn2.so

4. Export the ORACLE_BASE and ORACLE_HOME variables.

To use the OUI-based installation for CRS

Execute ./runInstaller and follow the instructions given in Oracle's 10g installation guide.

Note: If the cluster node names are not displayed on the "Specify the host names" page of the OUI, verify that the CRS OUI has been patched, that the pre-installation steps have been completed correctly, and that vcsmm is configured on the cluster.

To use the OUI-based installation for the database

Install the 10g database using the OUI. No special steps are needed for an Oracle 10g database installation.
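Steps 2 and 3 must be performed on every cluster node. A minimal sketch that does both from one node, assuming root ssh access between nodes and the example host names galaxy and nebula:

   # Create the directory and place the VERITAS membership library on each node.
   for node in galaxy nebula; do
       ssh root@$node "mkdir -p /opt/ORCLcluster/lib && \
           cp /usr/lib/libvcsmm.so /opt/ORCLcluster/lib/libskgxn2.so"
   done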

To perform post-installation relinking

Perform the following steps as the oracle user on all of the cluster nodes:

# cp $ORACLE_HOME/lib/libskgxp10.so $ORACLE_HOME/lib/libskgxp10.so.orig
# cp /usr/lib/libskgxp10.so $ORACLE_HOME/lib/libskgxp10.so
# cp $ORACLE_HOME/lib/libskgxn2.so $ORACLE_HOME/lib/libskgxn2.so.orig
# cp /usr/lib/libvcsmm.so $ORACLE_HOME/lib/libskgxn2.so
# cp $ORACLE_HOME/lib/libodm10.so $ORACLE_HOME/lib/libodm10.so.orig
# cp /usr/lib/libodm10.so $ORACLE_HOME/lib/libodm10.so
# cp $CRS_HOME/lib/libskgxn2.so $CRS_HOME/lib/libskgxn2.so.orig
# cp /usr/lib/libvcsmm.so $CRS_HOME/lib/libskgxn2.so

Note: If libskgxp10.so is present under $CRS_HOME/lib, it must be replaced in the same way. These steps must be repeated on all cluster nodes and must be done as the oracle user. The libraries must not be copied as root: doing so will break Oracle.

To complete the post-installation configuration

1. Log in as root:

   # su - root

2. On any one node of the cluster, execute:

   # $CRS_HOME/bin/crsctl set css misscount <value>

3. Back up the init.cssd file on all the nodes:

   # cp /etc/init.d/init.cssd /etc/init.d/init.cssd.orig

4. Patch the /etc/init.d/init.cssd script on all of the cluster nodes as follows:

   # cat /etc/init.d/init.cssd | sed -e '/Code to/i\
   #Veritas Cluster Server\
   if [ -d "/opt/VRTSvcs" ]\
   then \
   VENDOR_CLUSTER_DAEMON_CHECK="/opt/VRTSvcs/ops/bin/checkvcs"\
   CLINFO="/opt/VRTSvcs/ops/bin/clsinfo"\
   fi ' > /tmp/init.cssd

5. Copy the modified init.cssd script to each node after backing up the existing init.cssd file.

6. Stop CRS on all nodes:

   # /etc/init.d/init.crs stop

7. Verify that CRS is stopped on all nodes:

   # ps -ef | grep ocssd

8. After successful installation of CRS and Oracle 10g, a RAC database can be configured. Use your own tools or see Creating a Starter Database on page 351.

9. Bring CVM and the private NIC under VCS control. This can be achieved by:

   - Using the VCS Oracle RAC configuration wizard. See Creating an Oracle Service Group on page 137.
   - Manually editing the VCS configuration file. See Configuring CVM Service Group for Oracle 10g Manually on page 143, and Configuring the PrivNIC Agent on page 369.

   If the nodes are rebooted before configuring the CVM service group, the services will not start on their own.
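A minimal sketch for verifying the outcome of steps 6 and 7, and for confirming that the relinking step actually put the VERITAS libraries in place; it assumes ORACLE_HOME is exported and compares against the /usr/lib copies named above:

   # CRS should be down: the bracketed pattern keeps grep from matching itself.
   ps -ef | grep [o]cssd || echo "ocssd not running"

   # The relinked libraries should be byte-identical to the VERITAS copies.
   cmp /usr/lib/libodm10.so $ORACLE_HOME/lib/libodm10.so && echo "ODM library relinked"
   cmp /usr/lib/libvcsmm.so $ORACLE_HOME/lib/libskgxn2.so && echo "membership library relinked"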

Upgrading and Migrating Oracle Software on Red Hat

Use this chapter for upgrading Oracle software:

- Migrating from Oracle9i to Oracle 10g on page 119
- Applying Oracle 10g Patchsets on Red Hat on page 132

The configuration of VCS service groups described in Configuring VCS Service Groups for Oracle 9i on Red Hat on page 203 and Configuring VCS Service Groups for Oracle 10g on Red Hat on page 137, and the example main.cf file shown in Sample Configuration Files on page 343, are based on using shared storage for the Oracle binaries.

Migrating from Oracle9i to Oracle 10g

The migration procedure assumes Oracle9i is installed. Tasks for migration:

- Install Storage Foundation for Oracle RAC 4.1 if it is not already installed.
- Configuring Oracle 10g Prerequisites on CFS on page 120. For local installation, see Installing Oracle 10g Release 1 Locally on page 107.
- Installing Oracle 10g on a Shared Disk on page 124
- Preparing for Migration from Oracle9i to Oracle 10g on page 130
- Migrating Oracle9i to Oracle 10g on page 130
- Relinking after Migrating from Oracle9i to Oracle 10g

Configuring Oracle 10g Prerequisites on CFS

Perform the following configurations after upgrading to Storage Foundation for Oracle RAC 4.1:

- Creating Directories
- Configuring Private IP Addresses on All Cluster Nodes
- Obtaining Public Virtual IP Addresses for Use by Oracle

Creating Directories

On each system in the SFRAC cluster, create a directory for CRS_HOME. The disk space required is 0.5 GB minimum. Create CRS_HOME on a separate shared file system from the Oracle9i and Oracle 10g ORACLE_HOME directories.

To create a CRS_HOME

1. Log in as root user on one system:

   # su - root

2. On one node, create a disk group. For example:

   # vxdg init crsdg sdc

   For a shared CRS home, on the CVM master:

   # vxdg -s init crsdg sdc

3. Create the volume in the group for the CRS_HOME. The volume should be a minimum of 0.5 GB:

   # vxassist -g crsdg make crsvol 500M

4. Start the volume:

   # vxvol -g crsdg startall

5. Create a VxFS file system on which to install CRS. For example:

   # mkfs -t vxfs /dev/vx/rdsk/crsdg/crsvol

6. Create the mount point for the CRS_HOME:

   # mkdir /oracle/crs

   Note: Make sure that CRS_HOME is a subdirectory of ORACLE_BASE.

7. Mount the file system, using the device file for the block device:

   # mount -t vxfs /dev/vx/dsk/crsdg/crsvol /oracle/crs

   For a shared CRS home, on the CVM master:

   # mount -t vxfs -o cluster /dev/vx/dsk/crsdg/crsvol /oracle/crs

8. For a local mount only, edit the /etc/fstab file and list the new file system. For example:

   /dev/vx/dsk/crsdg/crsvol /oracle/crs vxfs defaults

9. Set the CRS_HOME directory for the oracle user as /oracle/crs.

10. Assign ownership of the directory to oracle and the group oinstall:

    # chown -R oracle:oinstall /oracle/crs

11. Repeat step 6 through step 7 on each of the other systems.

Creating Volumes or Directories for OCR and Vote Disk

The OCR and Vote disk must be shared among all nodes in a cluster. Whether you create shared volumes or shared file system directories, you can add them to the VCS configuration to make them highly available. The ORACLE_BASE directory contains CRS_HOME and ORACLE_HOME.

Create OCR and Vote disk directories in a cluster file system, or create OCR and Vote disk shared raw volumes. VERITAS recommends creating directories in a file system for the OCR and Vote disk instead of creating separate OCR and Vote disk volumes.

To create OCR and Vote disk directories on CFS

1. Log in as root user:

   # su - root

2. On the master node, create a shared disk group:

   # vxdg -s init ocrdg sdz

3. Create the volume in the shared group for the OCR and Vote disk:

   # vxassist -g ocrdg make ocrvol 200M

4. Assign ownership of the volume using the vxedit command:

   # vxedit -g ocrdg set user=oracle group=oinstall mode=660 ocrvol

5. Start the volume:

   # vxvol -g ocrdg startall

6. Create the file system:

   # mkfs -t vxfs /dev/vx/rdsk/ocrdg/ocrvol

7. Create a mount point for the Vote disk and OCR files on all the nodes:

   # mkdir /ora_crs

8. On all nodes, mount the file system:

   # mount -t vxfs -o cluster /dev/vx/dsk/ocrdg/ocrvol /ora_crs

9. Create a directory for the Vote disk and OCR files:

   # cd /ora_crs

   When installing CRS, specify the following for the OCR and Vote disk respectively:

   /ora_crs/ocr
   /ora_crs/vote

10. Set oracle to be the owner of the file system, and set the permissions:

    # chown -R oracle:oinstall /ora_crs
    # chmod 755 /ora_crs

To create the OCR and Vote disk on raw volumes

1. Log in as root user.

2. On the CVM master node, create a shared disk group:

   # vxdg -s init ocrdg sdz

3. Create volumes in the shared group for the OCR and Vote disk:

   # vxassist -g ocrdg make ocrvol 100M

   # vxassist -g ocrdg make vdvol 100M

4. Assign ownership of the volumes using the vxedit command:

   # vxedit -g ocrdg set user=oracle group=oinstall mode=660 ocrvol
   # vxedit -g ocrdg set user=oracle group=oinstall mode=660 vdvol

5. Start the volumes:

   # vxvol -g ocrdg startall

6. When installing CRS, specify the following for the OCR and Vote disk:

   OCR: /dev/vx/rdsk/ocrdg/ocrvol
   VD:  /dev/vx/rdsk/ocrdg/vdvol

Configuring Private IP Addresses on All Cluster Nodes

The CRS daemon requires a private IP address on each system to enable communications and heartbeating. Set up the private IP addresses as follows.

To configure private IP addresses on all cluster nodes

1. On each cluster system, determine a private NIC device for which LLT is configured by looking at the file /etc/llttab. For example, if eth0 is used as an LLT interconnect on one system, you can configure an available IP address for it. Example commands:

   # ifconfig eth0 down
   # ifconfig eth0 inet <private_ip> netmask <netmask>
   # ifconfig eth0 up

   Configure one private NIC on each node.

   Note: The private IP addresses of all nodes should be on the same physical network in the same IP subnet.

2. On each system, add the configured private IP addresses of all nodes to the /etc/hosts file, mapping them to symbolic names such as galaxy_priv and nebula_priv; a hedged example follows.
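A minimal sketch of the /etc/hosts entries for step 2, using hypothetical RFC 1918 addresses for the example nodes galaxy and nebula; substitute the private addresses actually configured on your interconnect:

   # /etc/hosts fragment on every node (addresses are illustrative only)
   192.168.12.1    galaxy_priv
   192.168.12.2    nebula_priv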

3. From each system, try pinging each of the other nodes using the symbolic name associated with the private NIC IP address.

After configuring the IP addresses, you can edit the CVM service group and add the PrivNIC resource to make the IP addresses highly available. See Configuring CVM Service Group for Oracle 10g Manually on page 143 and Configuring the PrivNIC Agent on page 369.

Obtaining Public Virtual IP Addresses for Use by Oracle

Before starting the Oracle installation, you must create virtual IP addresses for each node. An IP address and an associated host name are registered in the domain name service (DNS) for each public network interface.

To obtain public virtual IP addresses for use by Oracle

1. Obtain one virtual IP address per node.

2. Add entries for the virtual IP addresses and virtual public names in the /etc/hosts file on all nodes. Example:

   galaxy_pub
   nebula_pub

3. Register the virtual IP addresses with DNS.

Installing Oracle 10g on a Shared Disk

Installing the Oracle 10g software on shared storage in a RAC environment requires creating volumes and directories on CFS.

To prepare to install Oracle 10g Release 1 on a shared disk

1. Log into any system of the cluster as the root user:

   # su - root

2. On the master node, create a shared disk group.

   a. Enter:

      # vxdg -s init orabinvol_dg sdd

   b. Create the volume in the shared group:

      # vxassist -g orabinvol_dg make orabinvol 3000M

      For the Oracle 10g binaries, make the volume 3 GB.

   c. Start the volume:

      # vxvol -g orabinvol_dg startall

   d. On the master node, create a VxFS file system on the shared volume on which to install the Oracle 10g binaries. For example, create the file system on orabinvol:

      # mkfs -t vxfs /dev/vx/dsk/orabinvol_dg/orabinvol

3. On each system, create the mount point for the Oracle binaries and mount the file system.

   a. Create the mount point for the Oracle binaries if it does not already exist:

      # mkdir /oracle/10g

   b. Mount the file system, using the device file for the block device:

      # mount -t vxfs -o cluster /dev/vx/dsk/orabinvol_dg/orabinvol /oracle/10g

4. From the CVM master, execute:

   # vxedit -g orabinvol_dg set user=oracle group=oinstall mode=660 orabinvol

5. Set oracle to be the owner of the file system:

   # chown oracle:oinstall /oracle

To install Oracle 10g on a shared disk

1. Make sure that the Oracle installer is in a directory that is writable. If you are using the CD-ROM, make sure that the Oracle installation files are copied locally.

2. On the first system, set the following variables in root's environment on the node from which installsfrac -configure will be executed.

   a. For the Bourne shell (bash, sh, or ksh), enter:

      # export ORACLE_BASE=/oracle
      # export DISPLAY=host:0.0

   b. For the C shell (csh or tcsh):

      # setenv ORACLE_BASE /oracle
      # setenv DISPLAY host:0.0

3. If you are installing an Oracle 10g release whose OUI does not recognize RHEL4, set the environment variable OUI_ARGS to -ignoresysprereqs before launching the installsfrac -configure script.

   Caution: An OUI from an early Oracle 10g release does not recognize RHEL4 as a supported OS and will exit with an error if the environment variable OUI_ARGS is not set to -ignoresysprereqs before running the installsfrac -configure script.

   Example for bash:

   # export OUI_ARGS="-ignoresysprereqs"

   Example for the C shell:

   # setenv OUI_ARGS "-ignoresysprereqs"

4. Set the X server access control:

   # xhost + <hostname or ip address>

   Where <hostname or ip address> is the hostname or IP address of the server on which you are displaying. Example output:

   <hostname or ip address> being added to access control list

   Note: By default, the installsfrac utility uses ssh for remote communication. However, rsh can be used in place of ssh by using the -usersh option with the installsfrac utility. The installation of Oracle 10g requires that rsh be configured on all nodes. Refer to the Oracle 10g documentation for details on configuring rsh.

5. On the same node where you have set the environment variables, execute the following command as root:

   # cd /opt/VRTS/install
   # ./installsfrac -configure

   The installer displays the copyright message.

6. When the installer prompts, enter the system names, separated by spaces, on which to configure Storage Foundation for Oracle RAC. For the installation example used in this procedure:

   galaxy nebula

   The installer checks both systems for communication and creates a log directory on the second system in /var/tmp/installsfracxxxxxxxxxx.

7. When the initial system check is successfully completed, press Enter to continue.

8. The installer proceeds to verify the license keys.

   a. Enter additional licenses at this time if any are needed.

   b. When the licenses are successfully verified, press Enter to continue.

9. The installer presents task choices for installing and configuring, depending on the operating system you are running. Example:

   This program enables you to perform one of the following tasks:
   1) Install Oracle 9i.
   2) Install Oracle 10g.
   3) Relink SFRAC for Oracle 9i.
   4) Relink SFRAC for Oracle 10g.
   5) Configure different components of SFRAC
   Enter your choice [1-5]: [?]

   Select Install Oracle 10g. The installer proceeds to check environment settings.

10. Set the Oracle directories.

    a. When prompted, enter the directory name for CRS_HOME relative to the ORACLE_BASE directory.

    b. When prompted, enter the directory name for ORACLE_HOME relative to the ORACLE_BASE directory. The installer proceeds to validate ORACLE_HOME and check the node list.

    c. Press Enter to continue.

11. Configure user accounts.

    a. Enter the Oracle Unix user account when prompted. The installer checks for the user on all systems.

    b. Enter the Oracle inventory group when prompted. The installer checks for group existence on all systems.

    c. Press Enter to continue.

12. Enter the Oracle installer path for CRS when prompted. Specify the disk in your Oracle media kit where the CRS binaries reside. The installer validates the CRS installer. Example:

    /<CRS_Disk>/Disk1

    In the example, <CRS_Disk> is the disk where the CRS binaries reside.

13. Enter the Oracle installer path when prompted. The installer validates the Oracle installer, copies files, creates directories based on the information provided, and invokes the Oracle CRS installer. Example:

    /<DB_Disk>/Disk1

    In the example, <DB_Disk> is the disk where the Oracle binaries reside.

    When the Oracle CRS Installer appears, it prompts for the following:

    a. Specify the file source and destination and click Next.

    b. Specify the name for the install and CRS_HOME and click Next.

    c. Select the language and click Next.

    d. The host names of the clustered machines display. Specify the private names for the nodes in the cluster and click Next.

    e. Specify the private interconnect and click Next.

    f. Specify the OCR file name with an absolute path, for example /ora_crs/ocr/ocr, and click Next.

    g. Specify the CSS (Vote disk) file name with an absolute path, for example /ora_crs/css/css, and click Next. The installer proceeds with the CRS installation and sets the CRS parameters.

    h. When prompted at the end of the CRS installation, run the $CRS_HOME/root.sh file on each cluster node.

    i. Exit the CRS Installer after running root.sh and continue with installsfrac -configure for the Oracle 10g binaries installation.

14. Press Enter to continue. The Oracle 10g database installer window displays.

    a. Specify the file locations and click Next.

    b. Select all nodes in the cluster and click Next.

    c. Choose the installation type. The installer verifies that the requirements are all met.

    d. Provide the OSDBA and OSOPER groups for the UNIX Oracle user when prompted.

    e. When prompted by the Create Database option, select No.

    f. Install the binaries now.

    g. The installer prompts you to run $ORACLE_HOME/root.sh on each node. The VIPCA utility displays on the first node only.

       - Export LD_ASSUME_KERNEL; see the Oracle Metalink document for the required value.
       - Export DISPLAY=<ip>:0.
       - Execute $ORACLE_HOME/root.sh.
       - Select the public NIC for all the systems.
       - Enter the virtual IP addresses for the public NIC.

    h. The installer confirms when installation is successful. Exit the Oracle 10g installer and return to installsfrac -configure.

15. In installsfrac -configure, press Enter to continue. The success of the configuration is reported.

    The configuration summary is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.summary

    The installsfrac log is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.log

Preparing for Migration from Oracle9i to Oracle 10g

Perform the following:

1. Start the Oracle9i instances on all the nodes.

2. Verify that CRS is running on all the nodes.

3. Verify that the Oracle 10g listeners on all the nodes are up and running.

4. Verify that the /etc/oratab file has entries for the instances on all nodes.

Migrating Oracle9i to Oracle 10g

1. Log in as the oracle user.

2. Export the display.

3. Run the dbua utility for the migration, and refer to the Oracle Metalink document for a complete checklist for manual upgrades to Oracle 10g.

Relinking after Migrating from Oracle9i to Oracle 10g

To relink the VERITAS libraries

1. Log in as root user:

   # su - root

2. Export the ORACLE_BASE and ORACLE_HOME environment variables; see the sketch below.

3. Change directory to /opt/VRTS/install:

   # cd /opt/VRTS/install
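A minimal sketch for step 2, assuming a Bourne-style shell, /oracle as ORACLE_BASE as in the earlier examples, and a hypothetical ORACLE_HOME beneath it; adjust both to your layout:

   # Run as root in the shell that will invoke installsfrac.
   export ORACLE_BASE=/oracle
   export ORACLE_HOME=/oracle/10g   # hypothetical ORACLE_HOME path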

4. Start the installer to relink:

   # ./installsfrac -configure

5. The installer checks the systems:

   a. When the installer prompts, enter the system names, separated by spaces, on which to configure Storage Foundation for Oracle RAC. For the installation example used in this procedure:

      galaxy nebula

      The installer checks both systems for communication and creates a log directory on the second system in /var/tmp/installsfracxxxxxxxxxx.

   b. When the initial system check is successfully completed, press Enter to continue.

6. The installer proceeds to verify the license keys.

   a. Enter additional licenses now if any are needed.

   b. When the licenses are successfully verified, press Enter to continue.

7. The installer presents task choices for installing and configuring, depending on the operating system you are running. Example:

   This program enables you to perform one of the following tasks:
   1) Install Oracle 9i.
   2) Install Oracle 10g.
   3) Relink SFRAC for Oracle 9i.
   4) Relink SFRAC for Oracle 10g.
   5) Configure different components of SFRAC
   Enter your choice [1-5]: [?]

   Select Relink SFRAC for Oracle 10g. The installer proceeds to check environment settings.

8. Set the Oracle directories.

   a. When prompted, enter the directory name for CRS_HOME relative to /oracle. It is used relative to the ORACLE_BASE directory.

   b. When prompted, enter the directory name for ORACLE_HOME relative to /oracle. It is used relative to the ORACLE_BASE directory. The installer proceeds to validate ORACLE_HOME and check the node list.

   c. Press Enter to continue.

9. Set user accounts.

   a. Enter the Oracle Unix user account when prompted. The installer checks for the user on all systems.

   b. Enter the Oracle inventory group when prompted. The installer checks for group existence on all systems.

   c. Press Enter to continue.

10. The success of the configuration is reported. Press Enter to continue.

    The configuration summary is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.summary

    The installsfrac log is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.log

Applying Oracle 10g Patchsets on Red Hat

Storage Foundation for Oracle RAC supports Oracle 10g or greater. Because the early Oracle 10g releases are not certified for RHEL4, installing the later patchset is mandatory. Before applying the patchset to the database home, you must restore the original libodm10.so in $ORACLE_HOME/lib.

To apply an Oracle 10g patchset

1. Log in as the oracle user:

   # su - oracle

2. Restore the original libodm10.so, libskgxn2.so, and libskgxp10.so in $ORACLE_HOME/lib:

   - on only one node if ORACLE_HOME is shared
   - on each node if ORACLE_HOME is local

   Execute:

   $ cd $ORACLE_HOME/lib/
   $ ls -l libodm*
   $ mv libodm10.so libodm10.so.veritas
   $ cp libodm10.so_<date> libodm10.so

   For each file you restore, use the most recently dated file.

   $ ls -l libskgxn*
   $ mv libskgxn2.so libskgxn2.so.veritas
   $ cp libskgxn2.so_<date> libskgxn2.so
   $ ls -l libskgxp*
   $ mv libskgxp10.so libskgxp10.so.veritas
   $ cp libskgxp10.so_<date> libskgxp10.so

3. Apply the patchset.

   Note: Refer to the Oracle 10g patchset installation documentation for instructions on applying patchsets.

4. Perform the post-upgrade relinking of the VERITAS libraries.

To relink the VERITAS libraries

After applying an Oracle patchset, the Oracle libraries must be relinked. This post-upgrade reconfiguration allows Oracle to use the VERITAS RAC libraries rather than the native Oracle libraries. This is a required step in the Oracle RAC installation process, as Oracle will not function properly in an SFRAC environment without the VERITAS libraries.

1. Log in as root.

2. Export the ORACLE_BASE and ORACLE_HOME environment variables.

3. Change directories to /opt/VRTS/install:

   # cd /opt/VRTS/install

4. Execute ./installsfrac -configure.

5. The installer checks the systems:

   a. When the installer prompts, enter the system names, separated by spaces, on which to configure Storage Foundation for Oracle RAC. For the installation example used in this procedure:

      galaxy nebula

      The installer checks both systems for communication and creates a log directory on the second system in /var/tmp/installsfracxxxxxxxxxx.

   b. When the initial system check is successfully completed, press Enter to continue.

6. The installer proceeds to verify the license keys.

   a. Enter additional licenses now if any are needed.

   b. When the licenses are successfully verified, press Enter to continue.

7. The installer presents task choices for installing and configuring, depending on the operating system you are running. Example:

   This program enables you to perform one of the following tasks:
   1) Install Oracle 9i.
   2) Install Oracle 10g.
   3) Relink SFRAC for Oracle 9i.
   4) Relink SFRAC for Oracle 10g.
   5) Configure different components of SFRAC
   Enter your choice [1-5]: [?]

   Select Relink SFRAC for Oracle 10g. The installer proceeds to check environment settings.

8. Set the Oracle directories.

   a. When prompted, enter the directory name for CRS_HOME relative to /oracle. It is used relative to the ORACLE_BASE directory.

   b. When prompted, enter the directory name for ORACLE_HOME relative to /oracle. It is used relative to the ORACLE_BASE directory. The installer proceeds to validate ORACLE_HOME and check the node list.

   c. Press Enter to continue.

9. Set user accounts.

   a. Enter the Oracle Unix user account when prompted. The installer checks for the user on all systems.

   b. Enter the Oracle inventory group when prompted. The installer checks for group existence on all systems.

   c. Press Enter to continue.

10. The success of the configuration is reported. Press Enter to continue.

    The configuration summary is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.summary

    The installsfrac log is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.log

To complete the patchset installation

1. When the Oracle installation is complete, remove the temporary rsh access permissions you set for the systems in the cluster. For example, if you added a + character in the first line of the .rhosts file, remove that line. For the complete procedure, see the VERITAS Cluster Server 4.1 Installation Guide for Linux.

2. Check whether Oracle is using the VERITAS ODM library. After starting the Oracle instances, you can confirm that Oracle is using the VERITAS ODM libraries by examining the Oracle alert file, alert_$ORACLE_SID.log; see the sketch below. Look for the line that reads:

   Oracle instance running with ODM: VERITAS xxx ODM Library, Version x.x

3. Create an Oracle database on shared storage. Use your own tools or refer to Creating a Starter Database on page 351 to create a starter database.
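A minimal sketch for the ODM check in step 2, assuming ORACLE_BASE and ORACLE_SID are set and the hypothetical default 10g alert-log location under $ORACLE_BASE/admin; adjust the path if your background_dump_dest differs:

   # Look for the VERITAS ODM banner in the instance alert log.
   grep -i "ODM" $ORACLE_BASE/admin/$ORACLE_SID/bdump/alert_$ORACLE_SID.log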


Configuring VCS Service Groups for Oracle 10g on Red Hat

You can set up VCS to automate the Oracle RAC environment:

- Creating an Oracle Service Group on page 137
- Configuring CVM Service Group for Oracle 10g Manually on page 143
- Location of VCS Log Files on page 147

Creating an Oracle Service Group

The Oracle 10g RAC configuration wizard guides you through the modification of the existing CVM group to include CRS Application, PrivNIC, CFSMount, and CVMVolDg resources.

Before starting the wizard, verify that your Oracle installation can be configured:

- Oracle 10g CRS must be running.
- Oracle RAC instances and listeners must be running on all cluster nodes.
- The database files of all instances must be on shared storage, in a cluster file system or shared raw volumes.

Make sure you have available the information the wizard needs as it proceeds:

- Names of the database instances to be configured
- Private IP addresses used by Oracle 10g CRS
- 10g CRS OCR location
- Vote disk location

Establishing Graphical Access for the Wizard

The configuration wizard requires graphical access to the VCS systems where you want to configure service groups. If your VCS systems do not have monitors, or if you want to run the wizards from a remote UNIX system, do the following:

To establish graphical access from a remote system

1. From the remote system (jupiter, for example), run:

   # xhost +

2. Do one of the following, depending on your shell, on one of the systems where the wizard is to run, for example galaxy:

   For the Bourne shell (bash, sh, or ksh):

   # export DISPLAY=jupiter:0.0

   For the C shell (csh):

   # setenv DISPLAY jupiter:0.0

3. Verify that the DISPLAY environment variable has been updated:

   # echo $DISPLAY
   jupiter:0.0

Starting the Configuration Wizard

The configuration wizard for Oracle 10g RAC is started at the command line.

1. Log on to one of your VCS systems as root.

2. Start the configuration wizard:

   # /opt/VRTSvcs/bin/hawizard rac10g

The Welcome Window

The VCS RAC Oracle 10g configuration wizard starts with a Welcome window that highlights the prerequisites for configuration and the information you will need to complete the configuration. If your configuration does not meet the configuration requirements, you can stop the wizard by pressing Cancel. Take the necessary steps to meet the requirements and start the wizard again (see Starting the Configuration Wizard above).

The Wizard Discovers the RAC Configuration

If you are ready to configure the Oracle service group, press Next on the Welcome screen. The wizard begins discovering the current Oracle RAC information before proceeding with the next screen. If the wizard does not find all databases in the cluster, it halts with an error, indicating the problem. Press Cancel, and start the wizard again after you correct the problem.

Database Selection Screen

The databases and their instances running on the cluster are listed on this screen. If more than one database is listed, highlight only one of them, and click Next.

Private NIC Configuration Screen

Configure the private IP address that is used by Oracle 10g CRS, and select the NICs that the PrivNIC agent should use to make this IP address highly available. Specify this information for all the cluster nodes.

CRS Configuration Screen

Specify the OCR and Vote disk locations.

Database Configuration Screen

If you have installed the database on a cluster file system, the wizard discovers the mount point and displays it on the Database Configuration screen. If the database exists on raw volumes, the wizard discovers the volumes. The wizard also discovers the OCR and Vote disk mounts if they are present on CFS and displays them; if the OCR and Vote disk are on raw volumes, the wizard discovers the volumes. You can confirm the mount options displayed, or you can modify them.

Service Group Summary Screen

After you have configured the database and listener resources, the wizard displays the configuration on a Summary screen. Click Finish to complete the configuration.

Completing the RAC 10g Configuration Screen

Select the "Bring the service groups online" check box to start the newly added resources after exiting the wizard. Click Close to exit the wizard.

To verify the state of newly added resources

1. Use hagrp -state to check the status of the cvm group.

2. Use hares -state to check the status of the resources.

To reboot the cluster nodes

A reboot is required at this stage so that CRS and the Oracle RAC database instances use the VERITAS libraries.

1. Stop CRS on all nodes:

   # /etc/init.d/init.crs stop

2. Stop VCS on all nodes:

   # /opt/VRTSvcs/bin/hastop -local

3. Reboot all nodes.
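A minimal sketch of the verification commands, assuming the group name cvm from the sample configuration; hares -state lists every resource, so the grep pattern is only an illustrative filter:

   # Check the cvm group on all systems, then the individual resources.
   hagrp -state cvm
   hares -state | grep -i priv   # e.g., the PrivNIC resource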

Configuring CVM Service Group for Oracle 10g Manually

This section describes how to manually edit the main.cf file to configure the CVM and Oracle service groups.

To configure the CVM service group for Oracle 10g manually

1. Log in to one system as root.

2. Save your existing configuration to prevent any changes while you modify main.cf:

   # haconf -dump -makero

3. Make sure VCS is not running while you edit main.cf. Use the hastop command to stop the VCS engine on all systems and leave the resources available:

   # hastop -all -force

4. Make a backup copy of the main.cf file:

   # cd /etc/VRTSvcs/conf/config
   # cp main.cf main.orig

5. Using vi or another text editor, edit the main.cf file, modifying the cvm service group and creating the Oracle service groups, using the sample main.cf below as a guideline.

Sample main.cf for Oracle 10g

include "vcsapachetypes.cf"
include "types.cf"
include "CFSTypes.cf"
include "CVMTypes.cf"
include "OracleTypes.cf"
include "PrivNIC.cf"

cluster ora_cluster (
    UserNames = { admin = dophojolpkppnxpjom }
    Administrators = { admin }
    HacliUserLevel = COMMANDROOT
    CounterInterval = 5
    UseFence = SCSI3
    )

system galaxy ( )

system nebula ( )

group cvm (
    SystemList = { galaxy = 0, nebula = 1 }
    AutoFailOver = 0
    Parallel = 1
    AutoStartList = { galaxy, nebula }
    )

    CFSMount orabin_mnt (
        Critical = 0
        MountPoint = "/orabin"
        BlockDevice = "/dev/vx/dsk/orabindg/orabinvol"
        )

    CFSMount ocrvote_mnt (
        Critical = 0
        MountPoint = "/ocrvote"
        BlockDevice = "/dev/vx/dsk/ocrvotedg/ocrvotevol"
        )

    CFSMount oradata_mnt (
        Critical = 0
        MountPoint = "/oradata"
        BlockDevice = "/dev/vx/dsk/oradatadg/oradatavol"
        )

    CVMVolDg oradata_voldg (
        Critical = 0
        CVMDiskGroup = oradatadg
        CVMVolume = { oradatavol }
        CVMActivation = sw
        )

    CVMVolDg orabin_voldg (
        Critical = 0
        CVMDiskGroup = orabindg
        CVMVolume = { orabinvol }
        CVMActivation = sw
        )

    CVMVolDg ocrvote_voldg (
        Critical = 0
        CVMDiskGroup = ocrvotedg
        CVMVolume = { ocrvotevol }
        CVMActivation = sw
        )

    CFSfsckd vxfsckd ( )

    CVMCluster cvm_clus (
        CVMClustName = ora_cluster
        CVMNodeId = { galaxy = 0, nebula = 1 }
        CVMTransport = gab
        CVMTimeout = 200
        )

    CVMVxconfigd cvm_vxconfigd (
        Critical = 0
        CVMVxconfigdArgs = { syslog }
        )

    PrivNIC ora_priv (
        Critical = 0
        Device = { eth1 = 0, eth2 = 1 }
        Address@galaxy = " "
        Address@nebula = " "
        NetMask = " "
        )

    cvm_clus requires cvm_vxconfigd
    orabin_voldg requires cvm_clus
    oradata_voldg requires cvm_clus
    ocrvote_voldg requires cvm_clus
    ocrvote_mnt requires vxfsckd
    orabin_mnt requires vxfsckd
    oradata_mnt requires vxfsckd
    ocrvote_mnt requires ocrvote_voldg
    orabin_mnt requires orabin_voldg
    oradata_mnt requires oradata_voldg

Saving and Checking the Configuration

When you finish configuring the CVM and Oracle service groups by editing the main.cf file, verify the new configuration.

To save and check the configuration

1. Save and close the main.cf file.

2. Verify the syntax of the file /etc/VRTSvcs/conf/config/main.cf:

   # cd /etc/VRTSvcs/conf/config
   # hacf -verify .

3. Start the VCS engine on one system:

   # hastart

4. Type the hastatus command:

   # hastatus

5. When LOCAL_BUILD is listed in the message column, start VCS on the other system:

   # hastart

6. Verify that the service group resources are brought online. On one system, enter:

   # hagrp -display

To verify the state of newly added resources

1. Use hagrp -state to check the status of the cvm group.

2. Use hares -state to check the status of the resources.

To reboot the cluster nodes

A reboot is required to make sure that CRS and the Oracle RAC database instances use the VERITAS libraries.

1. Stop CRS on all nodes:

   # /etc/init.d/init.crs stop

2. Stop VCS on all nodes:

   # /opt/VRTSvcs/bin/hastop -local

3. Reboot all nodes.

Modifying the VCS Configuration

For additional information and instructions on modifying the VCS configuration by editing the main.cf file, refer to the VERITAS Cluster Server User's Guide.

Location of VCS Log Files

On all cluster nodes, look at the following log file for any errors or status messages:

/var/VRTSvcs/log/engine_A.log

When large amounts of data are written, multiple log files may be required; for example, engine_B.log, engine_C.log, and so on. The engine_A.log file contains the most recent data.
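A minimal sketch for scanning the engine log for problems, assuming the standard location named above:

   # Show the most recent errors or warnings from the VCS engine log.
   grep -iE "error|warn" /var/VRTSvcs/log/engine_A.log | tail -20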


Adding and Removing Cluster Nodes for Oracle 10g on Red Hat Linux

A cluster running VERITAS Storage Foundation for Oracle RAC can have as many as eight systems. If you have a multi-node cluster running Oracle 10g, you can add or remove a node:

- Adding a Node to an Oracle 10g Cluster on page 149
- Removing a Node from an Oracle 10g Cluster on page 162

Adding a Node to an Oracle 10g Cluster

The examples used in these procedures describe adding one node to a two-system cluster.

- Checking System Requirements for New Node on page 150
- Physically Adding a New System to the Cluster on page 150
- Installing Storage Foundation for Oracle RAC on the New System on page 150
- Starting Volume Manager on page 153
- Configuring LLT, GAB, VCSMM, and VXFEN Drivers on page 154
- Preparing to Add a Node on page 155
- Configuring CVM on page 156
- Using the Oracle Add Node Procedure on page 158
- Sample main.cf for Adding an Oracle 10g Node

Checking System Requirements for New Node

Ensure that the new systems meet all requirements for installing and using Storage Foundation for Oracle RAC.

- The new system must have the identical operating system and patch level as the existing systems. Refer to Reviewing the Requirements on page 22.
- Use a text window of 80 columns minimum by 24 lines minimum; 80 columns by 24 lines is the recommended size for the optimum display of the installsfrac script.
- Verify that the file /etc/fstab contains only valid entries, each of which specifies a file system that can be mounted.

Physically Adding a New System to the Cluster

The new system must have the identical operating system and patch level as the existing systems. When you physically add the new system to the cluster, it must have private network connections to two independent switches used by the cluster and be connected to the same shared storage devices as the existing nodes. Refer to the VERITAS Cluster Server Installation Guide.

After installing Storage Foundation for Oracle RAC on the new system and starting VxVM, the new system can access the same shared storage devices. The shared storage devices, including coordinator disks, must be exactly the same on all nodes. If the new node does not see the same disks as the existing nodes, it will be unable to join the cluster, as indicated by an error from CVM on the console.

Installing Storage Foundation for Oracle RAC on the New System

Check your systems for installation readiness and install Storage Foundation for Oracle RAC.

To check a new system for installation readiness

1. Log in to the new system as root. In this example, the new system is saturn.

2. Insert the VERITAS software disc containing the product into the new system's CD-ROM drive. The Linux volume-management utility automatically mounts the CD.

3. Change to the directory containing the installsfrac program for your Linux distribution and architecture:

   # cd /rhel4_ia32/storage_foundation_for_oracle_rac
   # cd /sles9_ia64/storage_foundation_for_oracle_rac
   # cd /sles9_x86_64/storage_foundation_for_oracle_rac

4. On each new node, run the installsfrac script with the precheck option to verify that the current operating system level, patch level, licenses, and disk space are adequate for a successful installation:

   # ./installsfrac -precheck saturn

   The utility's precheck function proceeds non-interactively. When complete, the utility displays the results of the check and saves the results in a log file.

5. If the check is successful, proceed to run installsfrac with the -installonly option. If the precheck function indicates that licensing is required, add the license when running the installation utility. The precheck function indicates any other required actions.

To install Storage Foundation for Oracle RAC without configuration

1. Check the Storage Foundation for Oracle RAC pre-installation configuration; see Preinstallation Configuration on page 28.

2. Set the PATH variable.

   For the Bourne shell (bash, sh, or ksh):

   $ export PATH=$PATH:/usr/lib/vxvm/bin:/usr/lib/fs/vxfs:\
   /opt/VRTSvxfs/cfs/bin:/opt/VRTSvcs/bin:\
   /opt/VRTSvcs/ops/bin:/opt/VRTSob/bin:/etc/vx/bin

   For the C shell (csh or tcsh):

   % setenv PATH $PATH:/usr/lib/vxvm/bin:/usr/lib/fs/vxfs:\
   /opt/VRTSvxfs/sbin:/opt/VRTSvcs/bin:/opt/VRTSvcs/ops/bin:\
   /opt/VRTSob/bin:/etc/vx/bin

   Note: For the root user, do not define paths to a cluster file system in the LD_LIBRARY_PATH variable. For example, define $ORACLE_HOME/lib in LD_LIBRARY_PATH for the user oracle only.

3. On the new system, use the -installonly option. This option enables you to install Storage Foundation for Oracle RAC on the new system without performing configuration. The configuration from the existing cluster systems is used.

   # ./installsfrac -installonly

   After presenting a copyright message, the script prompts for the name of the system and performs initial checks.

4. The script displays messages and prompts you to start the installation if you are ready to continue. Enter y.

5. As the installation proceeds, it checks for the presence of the infrastructure component packages, VRTScpi and VRTSvlic, and installs them if they are not present.

6. The installer prompts you for a license if one is not present:

   Checking SFRAC license key on saturn... not licensed
   Enter a SFRAC license key for saturn: [?] XXXX-XXXX-XXXX-XXXX-XX
   Registering VERITAS Storage Foundation for Oracle RAC PERMANENT key on saturn
   Do you want to enter another license key for saturn? [y,n,q,?] (n)
   SFRAC licensing completed successfully.

7. The installer lists the packages and patches it will install and checks whether any of them are present on the system.

8. After installsfrac installs the packages and patches, it summarizes the installation with the following messages:

   Installation of Storage Foundation for Oracle RAC 4.1 has completed successfully.

   The installation summary is saved at:
   /opt/VRTS/install/logs/installsfracdate_time.summary

   The installsfrac log is saved at:
   /opt/VRTS/install/logs/installsfracdate_time.log

   The installation response file is saved at:
   /opt/VRTS/install/logs/installsfracdate_time.response

   Storage Foundation for Oracle RAC cannot be started without configuration. Run installsfrac -configure when you are ready to configure SFRAC.

Note: Ignore the message advising that you must run installsfrac -configure. When adding a node to a cluster running Storage Foundation for Oracle RAC, you must manually configure the system using the following procedures.

Starting Volume Manager

Use the vxinstall utility. As you run the utility, answering n to the licensing prompts is sufficient, because you installed the appropriate license when you ran the installsfrac utility.

To start Volume Manager

1. Run the installer:

   # vxinstall

   VxVM uses license keys to control access. If you have a SPARCstorage Array (SSA) controller or a Sun Enterprise Network Array (SENA) controller attached to your system, then VxVM will grant you a limited use license automatically. The SSA and/or SENA license grants you unrestricted use of disks attached to an SSA or SENA controller, but disallows striping, RAID-5, and DMP on non-SSA and non-SENA disks. If you are not running an SSA or SENA controller, then you must obtain a license key to operate.

   Licensing information:
   Some licenses are already installed.
   Do you wish to review them [y,n,q] (default: y) n
   Do you wish to enter another license key [y,n,q] (default: n) n

2. Enter n when prompted to select enclosure-based naming for all disks:

   Do you want to use enclosure based names for all disks? [y,n,q,?] (default: n) n
   Sep 24 14:22:06 thor181 vxdmp: NOTICE: VxVM vxdmp added disk array DISKS, datype = Disk
   Starting the cache daemon, vxcached.

3. Enter n when prompted to set up a system-wide default disk group:

   Do you want to setup a system wide default disk group? [y,n,q,?] (default: y) n

   The installation is successfully completed.

4. Verify that the daemons are up and running. Enter the command:

   # vxdisk list

   The output should display the shared disks without errors.

5. Edit /etc/sysconfig/vcs, setting ONENODE=no; see the sketch below.
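A minimal sketch for step 5, assuming the ONENODE variable already exists in /etc/sysconfig/vcs:

   # Switch VCS out of single-node mode on the new system.
   sed -i 's/^ONENODE=.*/ONENODE=no/' /etc/sysconfig/vcs
   grep ONENODE /etc/sysconfig/vcs   # confirm the change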

Configuring LLT, GAB, VCSMM, and VXFEN Drivers

To configure LLT, GAB, VCSMM, and VXFEN drivers

1. On the new system, modify the file /etc/sysctl.conf to set the shared memory and other parameters required by Oracle; refer to the Oracle 10g Installation Guide for details. The value of the shared memory parameter takes effect when the system restarts.

2. Edit the file /etc/llthosts on the two existing systems. Using vi or another text editor, add the line for the new node to the file. The file should resemble:

0 galaxy
1 nebula
2 saturn

3. Copy the /etc/llthosts file from one of the existing systems to the new system. The /etc/llthosts file must be identical on all nodes in the cluster.

4. Create an /etc/llttab file on the new system. For example:

set-node saturn
set-cluster 7
link eth0 eth-<MAC ADDRESS FOR eth0> - ether - -
link eth1 eth-<MAC ADDRESS FOR eth1> - ether - -

The second line, the cluster ID, must be the same as on the existing nodes.

5. Use vi or another text editor to create the file /etc/gabtab on the new system. It should resemble the following example:

/sbin/gabconfig -c -nN

where N represents the number of systems in the cluster. For a three-system cluster, N would equal 3.

6. Edit the /etc/gabtab file on each of the existing systems, changing the content to match the file on the new system.

7. Set up the /etc/vcsmmtab and /etc/vxfendg files on the new system by copying them from one of the other existing nodes:

# scp galaxy:/etc/vcsmmtab /etc
# scp galaxy:/etc/vxfendg /etc

8. Run the commands to start LLT and GAB on the new node:

# /etc/init.d/llt start
# /etc/init.d/gab start

9. On the new node, start the VXFEN, VCSMM, and LMX drivers. Remove the /etc/vxfenmode file to enable fencing. Use the commands in the order shown:

# rm /etc/vxfenmode
# /etc/init.d/vxfen start
# /etc/init.d/vcsmm start
# /etc/init.d/lmx start

10. On the new node, start the GMS and ODM drivers. Use the commands in the order shown:

# /etc/init.d/vxgms start
# /etc/init.d/vxodm start

11. On the new node, verify that the GAB port memberships are a, b, d, and o. Run the command:

# /sbin/gabconfig -a
GAB Port Memberships

Preparing to Add a Node

Before configuring using the Oracle Add Node procedure, you must obtain IP addresses and configure CVM.

To prepare for installing Oracle

1. Obtain two IP addresses:

one for the private interconnect, which should be non-routable
one public IP to be plumbed as an alias against the host interface, which must be on the same subnet as the system network interface

2. Create a local group and local user for Oracle. Be sure to assign the same group ID, user ID, and home directory as exist on the systems in the current cluster.

# groupadd -g 1000 oinstall
# groupadd -g 1001 dba
# groupadd -g 1002 oper
# useradd -g dba -u <UID> -d /lhome/oracle oracle
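Note To verify that the new user matches the existing nodes, you can compare the id output on each system (an optional check; the numeric values shown are placeholders for whatever IDs your cluster uses):

# id oracle
uid=<UID>(oracle) gid=1001(dba) groups=1001(dba)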

3. Create a password for the user oracle:

# passwd oracle

4. Create the directory structure for all shared mount points as defined in the main.cf configuration file. Include the Oracle OCR and Vote disk mount point if on the file system, the Oracle binaries if on CFS, and the Oracle database. The directory structure must be the same as defined on the systems in the current cluster.

Example of mount point for OCR and Vote disk:

# mkdir -p /ora_crs/

Example of mount point for Oracle binaries:

# mkdir -p /oracle/src10g/

Example of mount point for Oracle database:

# mkdir -p /oracle/src10g/app/oracle/oradata/<sid>

5. Change ownership and group to the Oracle user:

# chown -R oracle:dba /ora_crs
# chown -R oracle:dba /oracle

Configuring CVM

As root user, execute the following on the CVM master node only.

To configure the CVM group in the main.cf file

1. To determine the CVM master node, execute:

# vxdctl -c mode

2. Make a backup copy of the main.cf file:

# cd /etc/VRTSvcs/conf/config
# cp main.cf main.cf.2node

3. Use the following commands to reconfigure the CVM group. Execute:

# haconf -makerw
# hasys -add saturn
# hagrp -modify cvm SystemList -add saturn 2
# hagrp -modify cvm AutoStartList -add saturn
# hares -modify ora_priv Device -add eth2 0 -sys saturn
# hares -modify ora_priv Device -add eth3 1 -sys saturn
# hares -modify ora_priv Address "<private IP address for saturn>" -sys saturn
# hares -modify cvm_clus CVMNodeId -add saturn 2
# haconf -dump -makero

4. Verify the syntax of the main.cf file:

# hacf -verify .

5. Stop the VCS engine on all systems, leaving the resources available:

# hastop -all -force

6. Copy the new version of the main.cf to each system in the cluster, including the newly added system:

# rcp (or scp) main.cf nebula:/etc/VRTSvcs/conf/config
# rcp (or scp) main.cf saturn:/etc/VRTSvcs/conf/config

In this example, galaxy is the system where the main.cf is edited and does not need a copy.

7. Start VCS on the CVM master:

# hastart

8. Verify the CVM group has come online:

# hastatus -sum

9. To enable the existing cluster to recognize the new node, execute on the current node:

# /etc/vx/bin/vxclustadm -m vcs -t gab reinit
# /etc/vx/bin/vxclustadm nidmap

10. Repeat steps 7 through 9 on each system in the existing cluster.
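Note You can confirm the reconfigured attributes at any point with the standard VCS display commands (an optional check; output omitted here):

# hagrp -display cvm -attribute SystemList
# hares -display cvm_clus -attribute CVMNodeId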

11. Verify whether CVM has started on the newly added node.

a. Determine the node ID:

# cat /etc/llthosts

b. Verify this host ID is seen by the GAB module:

# gabconfig -a

c. Start the VCS engine. If ports f, u, v, or w were present on the newly added node before hastart, the newly added node must be rebooted to properly start VCS:

# shutdown -r now

If ports f, u, v, or w were not present on the newly added node before hastart, use the following command to start VCS:

# hastart

12. Verify the CVM group has come online on the newly added node:

# hastatus -sum

Using the Oracle Add Node Procedure

For the Oracle procedure for adding a node, refer to the Metalink article Adding a Node to a 10g RAC Cluster. In this procedure, Oracle copies the CRS_HOME and ORACLE_HOME from an existing node in the cluster.

Based on VERITAS testing using this Metalink article (Last Revision Date: 14-JAN-2005), some anomalies may occur. The following notes address anomalies observed in VERITAS testing of the Oracle add node procedure, adding to the information under the corresponding headings of the Oracle Metalink document. This information is intended to supplement, not replace, the Oracle document.

Part A: Configure the OS and hardware for the new node

There are no specific issues to mention in these procedures.

Part B: Add the node to the cluster

There are no specific issues to mention in these procedures.

Part C: Add the Oracle Database software (with RAC option) to the new node

If VIPCA fails to start, export LD_ASSUME_KERNEL as follows.

For csh:

# setenv LD_ASSUME_KERNEL <kernel_version>

For other shells (bash, ksh):

# export LD_ASSUME_KERNEL=<kernel_version>

Part D: Reconfigure listeners for new node

There are no specific issues to mention in these procedures.

Part E: Add instances via DBCA

There are no specific issues to mention in these procedures.

Sample main.cf for Adding an Oracle 10g Node

Changes to the sample main.cf for adding a node are highlighted in red.

include "vcsapachetypes.cf"
include "types.cf"
include "CFSTypes.cf"
include "CVMTypes.cf"
include "OracleTypes.cf"
include "PrivNIC.cf"

cluster ora_cluster (
    UserNames = { admin = dophojolpkppnxpjom }
    Administrators = { admin }
    HacliUserLevel = COMMANDROOT
    CounterInterval = 5
    UseFence = SCSI3
    )

system galaxy ( )

system nebula ( )

system saturn ( )

group cvm (
    SystemList = { galaxy = 0, nebula = 1, saturn = 2 }
    AutoFailOver = 0
    Parallel = 1
    AutoStartList = { galaxy, nebula, saturn }
    )

CFSMount orabin_mnt (
    Critical = 0
    MountPoint = "/orabin"
    BlockDevice = "/dev/vx/dsk/orabindg/orabinvol"
    )

CFSMount ocrvote_mnt (
    Critical = 0
    MountPoint = "/ocrvote"
    BlockDevice = "/dev/vx/dsk/ocrvotedg/ocrvotevol"
    )

CFSMount oradata_mnt (
    Critical = 0
    MountPoint = "/oradata"
    BlockDevice = "/dev/vx/dsk/oradatadg/oradatavol"
    )

CVMVolDg oradata_voldg (
    Critical = 0
    CVMDiskGroup = oradatadg
    CVMVolume = { oradatavol }

    CVMActivation = sw
    )

CVMVolDg orabin_voldg (
    Critical = 0
    CVMDiskGroup = orabindg
    CVMVolume = { orabinvol }
    CVMActivation = sw
    )

CVMVolDg ocrvote_voldg (
    Critical = 0
    CVMDiskGroup = ocrvotedg
    CVMVolume = { ocrvotevol }
    CVMActivation = sw
    )

CFSfsckd vxfsckd ( )

CVMCluster cvm_clus (
    CVMClustName = ora_cluster
    CVMNodeId = { galaxy = 0, nebula = 1, saturn = 2 }
    CVMTransport = gab
    CVMTimeout = 200
    )

CVMVxconfigd cvm_vxconfigd (
    Critical = 0
    CVMVxconfigdArgs = { syslog }
    )

PrivNIC ora_priv (
    Critical = 0
    Device = { eth1 = 0, eth2 = 1 }
    Address@galaxy = " "
    Address@nebula = " "
    Address@saturn = " "
    NetMask = " "
    )

cvm_clus requires cvm_vxconfigd
orabin_voldg requires cvm_clus
oradata_voldg requires cvm_clus
ocrvote_voldg requires cvm_clus
ocrvote_mnt requires vxfsckd
orabin_mnt requires vxfsckd
oradata_mnt requires vxfsckd
ocrvote_mnt requires ocrvote_voldg
orabin_mnt requires orabin_voldg
oradata_mnt requires oradata_voldg

Removing a Node from an Oracle 10g Cluster

The examples used in these procedures describe removing one node from a three-system cluster.

Removing a Node from an Oracle 10g Cluster on page 162
Running the uninstallsfrac Utility on page 163
Removing the Infrastructure Packages on page 166
Editing VCS Configuration Files on Existing Nodes on page 167
Removing Licenses on page 169
Sample main.cf for Removing an Oracle 10g Node on page 170

Removing a Node from an Oracle 10g Cluster

For the Oracle procedure for removing a node, refer to the Metalink article Removing a Node from a 10g RAC Cluster, and follow the instructions provided for removing a node from an Oracle 10g cluster. Based on VERITAS testing using this Metalink article (Last Revision Date: 06-OCT-2004), some issues were found when these procedures were executed.

Part A: Remove the instance using DBCA

Step 5: You are asked to supply the system privilege user name and password. VERITAS uses the user sys.

Part B: Remove the Node from the Cluster

Step 1: The root user determines the node name and node number on each node as stored in the Cluster Registry. The olsnodes command requires the -n option to display the node number. Example:

$CRS_HOME/bin/olsnodes -n

Step 9: In this step you are redefining the list of nodes which remain in the cluster. The following error occurred upon executing this command in our environment:

PRKC-1002 : All the submitted commands did not execute successfully

In the Oracle log in the $ORACLE_BASE/oraInventory/logs directory we found the following error, as in this example:

/oracle/src10g/app/oracle/oraInventory/logs/updateNodeList<date_time>.log
Unable to read /oracle/src10g/app/oracle/oraInventory/Contents/libsList.ser. Some inventory information may be lost.

This may be a result of the issues discussed in the Metalink article CRS Home Is Only Partially Copied to Remote Node, or other installation issues. In either case, if you have questions, please contact Oracle Support for further information.

Step 13: This step must be executed on the node which will be deleted.

Step 19: VERITAS tested extensively using a cluster of eight nodes and deleted only one of the nodes. The $ORACLE_BASE/oraInventory/ContentsXML/inventory.xml file was not consistent among all nodes in the cluster. VERITAS testers believe this is due to the issue described in the Metalink article referencing partial copy of the oraInventory directory structure to remote nodes. Contact Oracle Support for further assistance if needed.

Running the uninstallsfrac Utility

You can run the script from any node in the cluster, including a node from which you are uninstalling Storage Foundation for Oracle RAC.

Note Prior to invoking the uninstallsfrac script, all service groups must be brought offline and VCS must be shut down.

For this example, Storage Foundation for Oracle RAC is removed from the node named saturn.

To run the uninstallsfrac utility

1. Before starting ./uninstallsfrac, execute:

# /opt/VRTSvcs/bin/hastop -local

2. As root user, start the uninstallation from any node from which you are uninstalling Storage Foundation for Oracle RAC. Enter:

# cd /opt/VRTS/install
# ./uninstallsfrac
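Note You can optionally confirm that all service groups are offline before proceeding by running the following from any node where VCS is still running (a quick check, not part of the documented procedure):

# hagrp -state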

3. The welcoming screen appears, followed by a notice that the utility discovers configuration files on the system. The information lists all the systems in the cluster and prompts you to indicate whether you want to uninstall from all systems. You must answer N. For example:

VCS configuration files exist on this system with the following information:

Cluster Name: rac_cluster1
Cluster ID Number: 7
Systems: galaxy nebula saturn
Service Groups: cvm oradb1_grp

Do you want to uninstall SFRAC from these systems? [y,n,q] (y) n

Caution Be sure to answer N. Otherwise the utility begins the procedure to uninstall Storage Foundation for Oracle RAC from all systems.

4. The installer prompts you to specify the name of the system from which you are uninstalling Storage Foundation for Oracle RAC:

Enter the system names separated by spaces on which to uninstall SFRAC: saturn

5. The uninstaller checks for Storage Foundation for Oracle RAC packages currently installed on your system. It also checks for dependencies between packages to determine the packages it can safely uninstall and in which order.

Checking SFRAC packages installed on saturn:

Checking VRTSvxmsa package... version 4.1 installed
Checking VRTSvxmsa dependencies... no dependencies
Checking VRTSvcsvr package... version 4.1 installed
Checking VRTSvcsvr dependencies... no dependencies
Checking VRTSvcsvp package... version 4.1 installed
Checking VRTSvcsvp dependencies... no dependencies
..
Checking VRTSgab package... version 4.1 installed
Checking VRTSgab dependencies... VRTSdbac VRTSglm VRTSgms VRTSvcs VRTSvcsag VRTSvcsmg VRTSvxfen
Checking VRTSllt package... version 4.1 installed
Checking VRTSllt dependencies... VRTSdbac VRTSgab VRTSglm VRTSgms VRTSvxfen
.

6. You can proceed with uninstallation when the uninstaller is ready:

uninstallsfrac is now ready to uninstall SFRAC packages.
All SFRAC processes that are currently running will be stopped.

Are you sure you want to uninstall SFRAC packages? [y,n,q] (y)

7. When you press Enter to proceed, the uninstaller stops processes and drivers running on each system, and reports its activities.

Stopping SFRAC processes on saturn:

Checking VRTSweb process... not running
Checking had process... running
Stopping had... Done
Checking hashadow process... not running
Checking CmdServer process... running
Killing CmdServer... Done
Checking notifier process... not running
Checking vcsmm driver... vcsmm module loaded
Stopping vcsmm driver... Done
..

8. When the installer begins removing packages from the systems, it indicates its progress by listing each step of the total number of steps required. For example:

Uninstalling Storage Foundation for Oracle RAC 4.1 on all systems simultaneously:

Uninstalling VRTSvxmsa 4.1 on saturn... Done 1 of 40 steps
Uninstalling VRTSdbckp 4.1 on saturn... Done 2 of 40 steps
Uninstalling VRTSvcsvr 4.1 on saturn... Done 3 of 40 steps
Uninstalling VRTScsocw 4.1 on saturn... Done 4 of 40 steps
..
Uninstalling VRTSvmpro 4.1 on saturn... Done 37 of 40 steps
Uninstalling VRTSvmdoc 4.1 on saturn... Done 38 of 40 steps
Uninstalling VRTSvmman 4.1 on saturn... Done 39 of 40 steps
Uninstalling VRTSvxvm 4.1 on saturn... Done 40 of 40 steps

Storage Foundation for Oracle RAC package uninstall completed successfully.

9. When the uninstaller is done, it describes the location of a summary file and a log of uninstallation activities.

Uninstallation of Storage Foundation for Oracle RAC has completed successfully.

The uninstallation summary is saved at:
/opt/VRTS/install/logs/uninstallsfrac<date_time>.summary
The uninstallsfrac log is saved at:
/opt/VRTS/install/logs/uninstallsfrac<date_time>.log

Removing the Infrastructure Packages

Some packages installed by installsfrac are not removed by uninstallsfrac. These are the infrastructure packages, which can be used by VERITAS products other than VERITAS Storage Foundation for Oracle RAC. These packages include:

VRTScpi, VERITAS Cross Product/Platform Installation
VRTSob, VERITAS Enterprise Administrator Service
VRTSobgui, VERITAS Enterprise Administrator
VRTSperl, VERITAS Perl Redistribution
VRTSvlic, VERITAS License Utilities

To remove the infrastructure packages using uninstallinfr

1. As root user, change to the directory containing the uninstallinfr program:

# cd /opt/VRTS/install

2. Enter the following command to start uninstallinfr:

# ./uninstallinfr

The opening screen appears, prompting you to enter the system from which you want to remove the infrastructure packages:

Enter the system names separated by spaces on which to uninstall Infrastructure: saturn

3. The program checks the operating system on the system and sets up a log file at /var/tmp/uninstallinfr<date_time>.

4. The program then checks the infrastructure packages on the system, determining whether or not there are dependencies upon them. Packages on which dependencies exist are not uninstalled; the packages having no dependencies can be removed. The program prompts you to continue:

Are you sure you want to uninstall Infrastructure packages? [y,n,q] (y) y

5. The utility stops the infrastructure processes that are running, if any, and proceeds to uninstall the packages from the system. When the infrastructure packages are successfully removed, a message resembling the following is displayed:

Uninstallation of Infrastructure has completed successfully.

The uninstallation summary is saved at:
/opt/VRTS/install/logs/uninstallinfr<date_time>.summary
The uninstallinfr log is saved at:
/opt/VRTS/install/logs/uninstallinfr<date_time>.log

Editing VCS Configuration Files on Existing Nodes

After running uninstallsfrac and uninstallinfr, modify the configuration files on the remaining nodes to remove references to the deleted node(s).

Edit /etc/llthosts

On each of the existing nodes, using vi or another editor, edit the file /etc/llthosts, removing the lines corresponding to the removed nodes. For example, if saturn is the node being removed from the cluster, remove the line 2 saturn from the file:

0 galaxy
1 nebula
2 saturn

It should now resemble:

0 galaxy
1 nebula

Edit /etc/gabtab

In the file /etc/gabtab, change the command contained in the file to reflect the number of systems after the node is removed:

/sbin/gabconfig -c -nN

where N is the number of nodes remaining. For example, with two nodes remaining, the file resembles:

/sbin/gabconfig -c -n2
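Note As an alternative to editing /etc/llthosts by hand, a sed one-liner can drop the departed node's entry (a sketch; back up the file first):

# cp /etc/llthosts /etc/llthosts.bak
# sed -i '/saturn/d' /etc/llthosts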

Modify the VCS Configuration to Remove a System

You can modify the VCS configuration using one of three possible methods: you can edit /etc/VRTSvcs/conf/config/main.cf (the VCS configuration file) directly, you can use the VCS GUI (Cluster Manager), or you can use the command line, as illustrated in the following example. Refer to the VERITAS Cluster Server User's Guide for details about how to configure VCS.

At this point in the process, all Oracle binaries have been removed from the system to be deleted. The instance has been removed from the database, that is, the thread disabled, and the spfile<sid>.ora edited by Oracle to remove any references to this instance. The next step is to remove all references in the main.cf to the deleted node(s).

As root user, execute the following on the CVM master node only.

To modify the CVM group in the main.cf file

1. To determine the CVM master node, execute:

# vxdctl -c mode

2. Make a backup copy of the main.cf file:

# cd /etc/VRTSvcs/conf/config
# cp main.cf main.cf.3node.bak

3. Use the following commands to reconfigure the CVM group. Execute:

# haconf -makerw
# hagrp -modify cvm SystemList -delete saturn
# hares -modify cvm_clus CVMNodeId -delete saturn
# hasys -delete saturn
# haconf -dump -makero

For an example of the resulting main.cf file, see Sample main.cf for Removing an Oracle 10g Node on page 170.

4. Verify the syntax of the main.cf file:

# hacf -verify .

The main.cf file now should not contain entries for the system saturn.

5. Stop the VCS engine on all systems, leaving the resources available:

# hastop -all -force

6. Copy the new version of the main.cf to each system in the cluster:

# rcp (or scp) main.cf galaxy:/etc/VRTSvcs/conf/config
# rcp (or scp) main.cf nebula:/etc/VRTSvcs/conf/config

7. Start the VCS engine on the current system:

# hastart

8. Verify the CVM group has come online:

# hastatus -sum

9. Repeat step 7 and step 8 on each system in the existing cluster.

Removing Licenses

Go to the directory /etc/vx/licenses/lic and remove unwanted licenses. You may rename them or move them to another directory if you do not want to remove them outright.
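Note To identify which key files correspond to unwanted licenses before deleting or moving them, you can review the installed keys first (an optional check using the vxlicrep utility described later in this guide):

# /sbin/vxlicrep
# ls /etc/vx/licenses/lic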

Sample main.cf for Removing an Oracle 10g Node

Changes to the sample main.cf for removing a node are highlighted using red strikethrough.

include "vcsapachetypes.cf"
include "types.cf"
include "CFSTypes.cf"
include "CVMTypes.cf"
include "OracleTypes.cf"
include "PrivNIC.cf"

cluster ora_cluster (
    UserNames = { admin = dophojolpkppnxpjom }
    Administrators = { admin }
    HacliUserLevel = COMMANDROOT
    CounterInterval = 5
    UseFence = SCSI3
    )

system galaxy ( )

system nebula ( )

system saturn ( )

group cvm (
    SystemList = { galaxy = 0, nebula = 1, saturn = 2 }
    AutoFailOver = 0
    Parallel = 1
    AutoStartList = { galaxy, nebula, saturn }
    )

CFSMount orabin_mnt (
    Critical = 0
    MountPoint = "/orabin"
    BlockDevice = "/dev/vx/dsk/orabindg/orabinvol"
    )

CFSMount ocrvote_mnt (
    Critical = 0
    MountPoint = "/ocrvote"
    BlockDevice = "/dev/vx/dsk/ocrvotedg/ocrvotevol"
    )

CFSMount oradata_mnt (
    Critical = 0
    MountPoint = "/oradata"
    BlockDevice = "/dev/vx/dsk/oradatadg/oradatavol"
    )

CVMVolDg oradata_voldg (
    Critical = 0
    CVMDiskGroup = oradatadg
    CVMVolume = { oradatavol }
    CVMActivation = sw
    )

CVMVolDg orabin_voldg (
    Critical = 0
    CVMDiskGroup = orabindg
    CVMVolume = { orabinvol }
    CVMActivation = sw
    )

CVMVolDg ocrvote_voldg (
    Critical = 0
    CVMDiskGroup = ocrvotedg
    CVMVolume = { ocrvotevol }
    CVMActivation = sw
    )

CFSfsckd vxfsckd ( )

CVMCluster cvm_clus (
    CVMClustName = ora_cluster
    CVMNodeId = { galaxy = 0, nebula = 1, saturn = 2 }
    CVMTransport = gab
    CVMTimeout = 200
    )

CVMVxconfigd cvm_vxconfigd (
    Critical = 0
    CVMVxconfigdArgs = { syslog }
    )

PrivNIC ora_priv (
    Critical = 0
    Device = { eth1 = 0, eth2 = 1 }
    Address@galaxy = " "
    Address@nebula = " "
    Address@saturn = " "
    NetMask = " "
    )


10. Uninstalling Storage Foundation for Oracle RAC on Oracle 10g Systems

At the completion of the uninstallation activity, you can continue to run Oracle using the single-instance binary generated when you unlink the VERITAS binaries from Oracle.

Uninstalling Storage Foundation for Oracle RAC on Oracle 10g:

Offlining Service Groups on page 173
Unlinking Oracle 10g Binaries from VERITAS Libraries on page 174
Removing Storage Foundation for Oracle RAC Packages on page 175
Removing Other VERITAS Packages on page 178
Removing License Files on page 178
Removing Other Configuration Files on page 178

Offlining Service Groups

Offline all service groups and shut down VCS prior to launching the uninstallsfrac script:

# /etc/init.d/init.crs stop
# /opt/VRTSvcs/bin/hastop -all

Note Do not use the -force option when executing hastop. Doing so would leave all service groups online while shutting down VCS, causing undesired results during the uninstallation of Storage Foundation for Oracle RAC.
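Note You can verify that CRS and VCS are fully stopped before uninstalling by checking for their daemons (an optional check):

# ps -ef | grep -E 'crsd|ocssd|evmd'
# ps -ef | grep had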

Unlinking Oracle 10g Binaries from VERITAS Libraries

To unlink Oracle 10g binaries from VERITAS libraries

1. Log in as the oracle user:

# su oracle

2. Change to the ORACLE_HOME lib directory:

$ cd $ORACLE_HOME/lib

3. Restore the original Oracle libraries from the backup copies. There may be multiple backup copies, named <library_name>_xx_xx_xx-xx_xx_xx. Run the following command to verify that a backup copy is an original Oracle library:

$ strings <library_name>_xx_xx_xx-xx_xx_xx | grep -i veritas

The output should not contain the string "veritas". Of the libraries that do not contain the "veritas" string, select the one with the latest timestamp and use the following procedure to restore it:

$ rm libskgxp10.so
$ cp libskgxp10.so_xx_xx_xx-xx_xx_xx libskgxp10.so
$ rm libskgxn2.so
$ cp libskgxn2.so_xx_xx_xx-xx_xx_xx libskgxn2.so
$ rm libodm10.so
$ cp libodm10.so_xx_xx_xx-xx_xx_xx libodm10.so

4. Change to the CRS_HOME lib directory:

$ cd $CRS_HOME/lib

5. Restore the original CRS libraries. Use the method in step 3 to identify the libraries to restore:

$ rm libskgxn2.so
$ cp libskgxn2.so_xx_xx_xx-xx_xx_xx libskgxn2.so
$ rm libskgxp10.so
$ cp libskgxp10.so_xx_xx_xx-xx_xx_xx libskgxp10.so

6. Change to the ORACLE_HOME lib32 directory and update the library. Use the method in step 3 to identify the library to restore:

# cd $ORACLE_HOME/lib32
# rm libskgxn2.so
# cp libskgxn2.so_xx_xx_xx-xx_xx_xx libskgxn2.so

7. Change to the CRS_HOME lib32 directory and update the library. Use the method in step 3 to identify the library to restore:

# cd $CRS_HOME/lib32
# rm libskgxn2.so
# cp libskgxn2.so_xx_xx_xx-xx_xx_xx libskgxn2.so

8. Restore the init script:

$ cp /etc/init.d/init.cssd.orig /etc/init.d/init.cssd

If Oracle was installed manually, init.cssd.orig will not be present; in this case, remove the following from the init.cssd script:

# Veritas Cluster Server
if [ -d "/opt/VRTSvcs" ]
then
    VENDOR_CLUSTER_DAEMON_CHECK="/opt/VRTSvcs/ops/bin/checkvcs"
    CLINFO="/opt/VRTSvcs/ops/bin/clsinfo"
fi

Removing Storage Foundation for Oracle RAC Packages

The uninstallsfrac script removes packages installed by installsfrac on all systems in the cluster. The installer removes all Maintenance Pack 1 packages as well as the Storage Foundation for Oracle RAC 4.1 rpms, regardless of the version of Oracle used.

To run uninstallsfrac

1. As root user, change to the directory containing the uninstallsfrac program:

# cd /opt/VRTS/install

2. Enter the following command to start uninstallsfrac:

# ./uninstallsfrac

The utility reports the cluster and systems for uninstalling.

VCS configuration files exist on this system with the following information:

Cluster Name: rac_cluster1
Cluster ID Number: 7
Systems: galaxy nebula
Service Groups: ClusterService

Do you want to uninstall SFRAC from these systems? [y,n,q] (y) y

3. Enter y if the cluster information is correct. The utility performs and displays a series of system checks for operating system level and system-to-system communication. Note that a log, with the date and time as part of its name, is created for the uninstallation:

Logs for uninstallsfrac are being created in /var/tmp/uninstallsfracxxxxxxxxx.
Using /usr/bin/ssh -x and /usr/bin/scp to communicate with remote systems.
Initial system check completed successfully.

Press [Enter] to continue:

The uninstaller checks for Storage Foundation for Oracle RAC packages currently installed on your system, and for dependencies between packages to determine the packages it can safely uninstall and in which order. The uninstaller stops processes and drivers running on each system, and reports its activities.

Are you sure you want to uninstall SFRAC rpms? [y,n,q] (y) y

Stopping Cluster Manager and Agents on galaxy... Done
Stopping Cluster Manager and Agents on nebula... Done
Checking galaxy for open volumes... None
Checking for patch(1) rpm... version installed

Stopping SFRAC processes on galaxy:

Checking vxodm driver... vxodm module loaded
Stopping vxodm driver... Done
Unloading vxodm module on galaxy... Done
Checking vxgms driver... vxgms module loaded
Stopping vxgms driver... Done
Unloading vxgms module on galaxy... Done
Checking had process... not running
Checking hashadow process... not running
Checking CmdServer process... running
Killing CmdServer... Done
Checking notifier process... not running
Checking vxsvc process... running
Stopping vxsvc... Done
Checking vcsmm driver... not running
Checking lmx driver... not running
Checking vxfen driver... not running
Checking gab driver... gab module loaded
Stopping gab driver... Done
Unloading gab module on galaxy... Done
Checking llt driver... llt module loaded
Stopping llt driver... Done
Unloading llt module on galaxy... Done

Checking nebula for open volumes... None
Checking for patch(1) rpm... version installed

Stopping SFRAC processes on nebula:

Checking vxodm driver... vxodm module loaded

Stopping vxodm driver... Done
Unloading vxodm module on nebula... Done
Checking vxgms driver... vxgms module loaded
Stopping vxgms driver... Done
Unloading vxgms module on nebula... Done
Checking had process... not running
Checking hashadow process... not running
Checking CmdServer process... running
Killing CmdServer... Done
Checking notifier process... not running
Checking vxsvc process... running
Stopping vxsvc... Done
Checking vcsmm driver... not running
Checking lmx driver... not running
Checking vxfen driver... not running
Checking gab driver... gab module loaded
Stopping gab driver... Done
Unloading gab module on nebula... Done
Checking llt driver... llt module loaded
Stopping llt driver... Done
Unloading llt module on nebula... Done

4. When the installer begins removing packages from the systems, it indicates its progress by listing each step of the total number of steps required. For example:

Uninstalling Storage Foundation for Oracle RAC 4.1 on all systems simultaneously:

Uninstalling VRTScsocw on nebula... Done
Uninstalling VRTScsocw on galaxy... Done
Uninstalling VRTSvcsor on nebula... Done
Uninstalling VRTSvcsor on galaxy... Done
Uninstalling VRTSdbac on nebula... Done
Uninstalling VRTSdbac on galaxy... Done
Uninstalling VRTSglm on nebula... Done
Uninstalling VRTSglm on galaxy... Done
Uninstalling VRTScavf on galaxy... Done

Storage Foundation for Oracle RAC rpm uninstall completed successfully.

5. When the uninstaller is done, it describes the location of a summary file and a log of uninstallation activities.

Uninstallation of Storage Foundation for Oracle RAC has completed successfully.

The uninstallation summary is saved at:
/opt/VRTS/install/logs/uninstallsfracxxxxxxxxx.summary
The uninstallsfrac log is saved at:
/opt/VRTS/install/logs/uninstallsfracxxxxxxxxx.log

Removing Other VERITAS Packages

Some packages installed by installsfrac are not removed by uninstallsfrac. These packages, which can be used by VERITAS products other than VERITAS Storage Foundation for Oracle RAC, include:

VRTScpi, VERITAS Cross Product/Platform Installation
VRTSob, VERITAS Enterprise Administrator Service
VRTSobgui, VERITAS Enterprise Administrator
VRTSperl, VERITAS Perl Redistribution
VRTSvlic, VERITAS License Utilities

If you want to remove them, use the uninstallinfr script.

To uninstall the VERITAS infrastructure rpms

# cd /opt/VRTS/install
# ./uninstallinfr

Removing License Files

To remove license files

1. To see what license key files you have installed on a system, enter:

# /sbin/vxlicrep

The output lists the license keys and information about their respective products.

2. Go to the directory containing the license key files and list them. Enter:

# cd /etc/vx/licenses/lic
# ls -a

3. Using the output from step 1, identify and delete unwanted key files listed in step 2. Unwanted keys may be deleted by removing the license key file.

Removing Other Configuration Files

You can remove the following configuration files:

/etc/vcsmmtab
/etc/vxfentab
/etc/vxfendg

Section III. Using Oracle9i on Red Hat

The chapters in this section assume you have used the overview, planning, and installing chapters to install and configure Storage Foundation for Oracle RAC. See Using Storage Foundation for Oracle RAC on page 1.

Procedures for using Oracle9i with Storage Foundation for Oracle RAC on Red Hat Linux are provided in this section.

Chapter 11. Installing Oracle9i Software on Red Hat on page 181
Chapter 12. Configuring VCS Service Groups for Oracle 9i on Red Hat on page 203
Chapter 13. Adding and Removing Cluster Nodes Using Oracle9i on Red Hat on page 231
Chapter 14. Uninstalling Storage Foundation for Oracle RAC on Systems Using Oracle9i


11. Installing Oracle9i Software on Red Hat

Use this chapter to install Oracle9i software on clean Red Hat Linux systems. You can install Oracle9i binaries on shared storage or locally on each node.

Storage Foundation for Oracle RAC supports Oracle9i Patchset 4 and above. Storage Foundation for Oracle RAC does not support Oracle9i on SUSE; see Reviewing the Requirements on page 22. For Oracle patchsets, see Applying Oracle9i Patchsets on Red Hat on page 199. For Oracle information, access the Oracle Technology Network.

The examples in this guide primarily show the shared storage installation. The configuration of VCS service groups described in Configuring VCS Service Groups for Oracle 9i on Red Hat on page 203 and the example main.cf file shown in Sample Configuration Files on page 343 are based on using shared storage for the Oracle9i binaries.

Oracle Installation Checklist

After the Storage Foundation for Oracle RAC components are installed and configured, you can install the Oracle9i Release 2 software and the Oracle patchset. Installing Oracle9i includes:

Creating a disk group, a shared volume, and a cluster file system, if required, for the Oracle9i SRVM component.
Creating a disk group, a shared volume, and a cluster file system, if required, for the Oracle RAC software.
Installing Oracle on the shared storage by running the install script installsfrac with the -configure option on a cluster node.
Installing Oracle9i Release 2 Patchsets.

You can install Oracle9i binaries on either shared disks or a local disk.

Configuring Prerequisites for Oracle9i

Before installing Oracle and creating a database:

Verify that the shared disk arrays support SCSI-3 persistent reservations and I/O fencing. Refer to Configuring VCS Service Groups for Oracle 9i on Red Hat on page 203. I/O fencing is described in About I/O Fencing on page 15.
Edit the file /etc/sysctl.conf and set the shared memory parameters. Refer to the Oracle9i Installation Guide.
ssh or rsh must be enabled in root user context.
Make sure that all Oracle installation prerequisites are satisfied. Refer to Oracle documentation for details.

Configuring Shared Disk Groups and Volumes

The following procedures highlight general information that applies to creating shared disk groups. Refer to the VERITAS Volume Manager Administrator's Guide for full documentation about creating and managing shared disk groups.

To determine information about a disk group

You can display information about a specific disk group by entering the command:

# vxdg list disk_group

To check the connectivity policy on a shared disk group

By default, the connectivity policy for a shared disk group is set to global, which means that when one system reports a disk failure, all nodes in the cluster detach from the disk group. This is a precaution against possible data corruption.

1. Verify that the output of the vxdg list shared_disk_group command includes the following line:

detach-policy: global

2. If the connectivity policy is local for a disk group and you must set it to global to avoid possible data corruption, use the following command:

# vxedit -g shared_disk_group set diskdetpolicy=global shared_disk_group
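Note A quick way to check the policy on an existing group is to filter the vxdg list output for it (the disk group name here is illustrative):

# vxdg list shared_disk_group | grep detach-policy
detach-policy: global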

To determine whether a node is the CVM master or a CVM slave

1. On one node, determine whether it is the master or the slave node by using the command:

# vxdctl -c mode

Expected output:

cluster active - MASTER

or:

cluster active - SLAVE

To enable write access to the volumes in the disk group

1. You must enable write access to the shared volumes so you can create databases on them. You can use the following commands on the CVM master node:

# vxdg -s import shared_disk_group
# vxvol -g shared_disk_group startall
# vxdg -g shared_disk_group set activation=sw

2. You can use the following command on the slave nodes:

# vxdg -g shared_disk_group set activation=sw

Refer to the description of disk group activation modes in the VERITAS Volume Manager Administrator's Guide for more information (see the chapter on Cluster Functionality).

Importing a shared disk group

When shared disk groups under VCS control are taken offline on any or all nodes manually, they are no longer identified as shared. For example, if the disk group is manually deported from a node for maintenance purposes, the disk group must be manually reimported as shared. On any node, use the command:

# vxdg -s import shared_disk_group

Creating Oracle User and Group on Each System

On each system, create a local group and local user for Oracle. For example, create the group dba and the user oracle. Be sure to assign the same group ID, user ID, and home directory for the user on each system.

To create the dba group

1. Create the dba group on each system:

# groupadd -g 1000 oinstall
# groupadd -g 1001 dba
# groupadd -g 1002 oper

2. Create the oracle user on each system; the id output for the oracle user should resemble the following:

uid=100(oracle) gid=1000(oinstall) groups=1001(dba),1002(oper)

3. Enable rsh for the oracle user on all nodes.

Creating Shared Storage Location for SRVM

The Oracle database requires that the Oracle SRVM component be located on the shared storage device. Oracle9i Release 2 can use a raw volume set up for this purpose. Optionally, Oracle9i Release 2 can use a cluster file system for SRVM created in this raw volume. This volume requires at least 200 megabytes.

To create the disk group and the volume for SRVM

For example, to create the volume srvm_vol of 300 megabytes in a shared disk group orasrv_dg on disk sdc, do the following:

1. From the master node, create the shared disk group on the shared disk sdc:

# vxdg -s init orasrv_dg sdc

2. The connectivity policy for the disk group should be global. Check it by entering:

# vxdg list orasrv_dg

Verify that the output contains the line:

detach-policy: global

3. Create the volume in the shared disk group:

# vxassist -g orasrv_dg make srvm_vol 300M

4. Start the volume:

# vxvol -g orasrv_dg startall
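Note You can confirm that the volume is ready before placing a file system on it (an optional check); the vxprint output should report the volume as ENABLED and ACTIVE:

# vxprint -g orasrv_dg srvm_vol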

5. Set the ownership and permissions for the volume:

# vxedit -g orasrv_dg set user=oracle group=oinstall mode=755 srvm_vol

6. Start the volume:

# vxvol -g orasrv_dg startall

To create a Cluster File System for SRVM (Optional)

1. On the master node, create a VxFS file system for SRVM on the shared volume created for this purpose (see the previous section). For example, create the file system on srvm_vol:

# mkfs -t vxfs /dev/vx/dsk/orasrv_dg/srvm_vol

2. On each system, create the mount point for the file system:

# mkdir /orasrv

3. On each system, mount the file system, using the device file for the block device:

# mount -t vxfs -o cluster /dev/vx/dsk/orasrv_dg/srvm_vol /orasrv

4. On one of the systems, create a file for srvm_vol that is greater than 100 MB. For example, to create a file of 200 MB, enter:

# dd if=/dev/zero of=/orasrv/orasrvm bs=1M count=200

5. After mounting the file system, change the ownership:

# chown -R oracle:dba /orasrv

Installing Oracle9i Release 2 Software

You can install the Oracle9i binaries on a shared disk or on a local hard disk on each node.

Note For Oracle installation considerations, including prerequisites for installing Oracle RAC and a description of the software packages installed, refer to Oracle documentation.

Installing Oracle9i Release 2 on a Shared Disk

The following procedure describes installing the Oracle9i Release 2 software on shared storage in a RAC environment.

To prepare for installing Oracle9i Release 2 on a shared disk

1. Log into any system of the cluster as the root user.

2. On the master node, create a shared disk group.

a. Enter:

# vxdg -s init orabinvol_dg sdd

b. Create the volume in the shared group:

# vxassist -g orabinvol_dg make orabinvol 3000M

For the Oracle9i Release 2 binaries, make the volume 3 GB.

c. Start the volume:

# vxvol -g orabinvol_dg startall

d. On the master node, create a VxFS file system on the shared volume on which to install the Oracle9i binaries. For example, create the file system on orabinvol:

# mkfs -t vxfs /dev/vx/dsk/orabinvol_dg/orabinvol

3. On each system, create the mount point for Oracle binaries and mount the file system.

a. Create the mount point for the Oracle binaries if it does not already exist:

# mkdir /oracle

b. Mount the file system, using the device file for the block device:

# mount -t vxfs -o cluster /dev/vx/dsk/orabinvol_dg/orabinvol /oracle

4. From the CVM master, assign ownership of the Oracle directory:

# vxedit -g orabinvol_dg set user=oracle group=oinstall mode=775 orabinvol

Note To determine the CVM master, execute the following on any node in the cluster:

# vxdctl -c mode
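Note To confirm that the file system is mounted with the cluster option on each node, you can check the mount table (the output shown is illustrative):

# mount | grep /oracle
/dev/vx/dsk/orabinvol_dg/orabinvol on /oracle type vxfs (rw,cluster)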

5. If you are installing from the CD-ROM, insert Disk1 of the Oracle9i disks into the CD-ROM drive of the node from which you are installing. If you are installing from a downloaded distribution, skip to the next step.

If automount is not configured, mount the CD-ROM with the following command as root:

# mount /dev/cdrom /mnt/cdrom

6. Set the X server access control:

# xhost + <hostname or ip address>

where <hostname or ip address> is the hostname or IP address of the server to which you are displaying. Example:

# xhost + <ip address>
<ip address> being added to access control list

7. Back up the gcc files:

# mv /usr/bin/gcc /usr/bin/gcc.orig
# mv /usr/bin/gcc32 /usr/bin/gcc

8. Apply the required Oracle patch as root on all the cluster nodes.

9. On the first system, set the following variables in root's environment on the node from which installsfrac -configure will be executed.

a. For Bourne shell (bash, sh, or ksh), enter:

# export ORACLE_BASE=/oracle
# export ORACLE_HOME=/oracle/orahome
# export DISPLAY=host:0.0

b. For the C Shell (csh or tcsh):

# setenv ORACLE_BASE /oracle
# setenv ORACLE_HOME /oracle/orahome
# setenv DISPLAY host:0.0

To install Oracle9i Release 2 on a shared disk

Note By default, the installsfrac utility uses ssh for remote communication. However, rsh can be used in place of ssh by using the -usersh option with the installsfrac utility. The installation of Oracle9i Release 2 requires that rsh be configured on all nodes. Refer to the Oracle9i documentation for details on configuring rsh.

1. On the same node where you have set the environment variables, execute the following command as root:

# cd /opt/VRTS/install
# ./installsfrac -configure

The installer displays the copyright message.

2. When the installer displays a prompt, enter the system names separated by spaces on which to configure Storage Foundation for Oracle RAC. For the installation example used in this procedure:

galaxy nebula

3. Enter all the system names on which Storage Foundation for Oracle RAC was originally installed. The system list must match the /sbin/vcsmmlsnodes output. The installer checks both systems for communication and creates a log directory on the second system in /var/tmp/installsfracxxxxxxxxxx, where xxxx represents the timestamp.

4. When the initial system check is successfully completed, press Enter to continue.

5. The installer proceeds to verify the license keys.

a. Enter additional licenses at this time if any are needed.

b. When the licenses are successfully verified, press Enter to continue.

6. The installer presents task choices for installing and configuring, depending on the operating system you are running.

Example:

This program enables you to perform one of the following tasks:
1) Install Oracle 9i.
2) Install Oracle 10g.
3) Relink SFRAC for Oracle 9i.
4) Relink SFRAC for Oracle 10g.
5) Configure different components of SFRAC

Enter your choice [1-5]: [?]

Select Install Oracle 9i. The installer proceeds to check environment settings.

7. The installer continues to check environment settings. Press Enter to continue when prompted. The installer proceeds to the configuration of an Oracle UNIX user account.

8. The installer proceeds with validating environment settings. The installer checks for the existence of the ORACLE_HOME and ORACLE_BASE directories, and creates them if required. The installer checks that the selected nodes have the required membership.

a. Enter the Oracle UNIX user account when prompted. Example:

Enter Oracle Unix User Account: [?] oracle

b. The installer checks that the account exists on all the systems. The installer then checks that remote communication is enabled between all nodes of the cluster in the Oracle user context. Enter the Oracle Inventory group when prompted. Example:

Enter Oracle Inventory group: [?] oinstall

c. The installer checks that the Oracle Inventory group exists on all the nodes of the cluster. Press Enter to continue when prompted.

9. The installer prompts for the Oracle installer path and checks if the required binaries exist. Then the installer calls the Oracle Universal Installer. Example:

Oracle Binary Installer Path

Enter the oracle installer path: [?] /mnt/cdrom/

10. Install the Oracle binaries.

a. The Oracle installer prompts you to run the script /tmp/orainstRoot.sh. Make sure the script exists on the node where installsfrac has been run before proceeding. If so, skip to step c on page 190.

b. If the script /tmp/orainstRoot.sh does not exist on each node, copy the script from the first system to each of the other cluster nodes.

c. Run the script /tmp/orainstRoot.sh on each node as root.

d. When the Node Selection dialog box appears, select the DEFAULT node for installation.

e. Be sure to select the installation option: Software Only.

Note If the Node Selection dialog box does not appear, refer to Missing Dialog Box During Installation of Oracle9i on page 79.

f. As the installer runs, it prompts you to designate the location for the Oracle SRVM component. You can enter the name of a file, such as ora_srvm, located on the cluster file system. See To create a Cluster File System for SRVM (Optional) on page 185. For example, you would enter:

/orasrv/ora_srvm

Or, to use the raw volume, enter /dev/vx/dsk/orasrv_dg/srvm_vol.

g. When you are prompted by the installer, unmount the currently mounted CD. Log in as root and use the umount command. For example, enter:

# umount /mnt/cdrom

Remove the disc.

h. If you must mount another disk to continue the installation of Oracle9i: insert the next disk in the CD-ROM drive. For mounting the CD-ROM, go to step 5 on page 187. Continue by pressing OK. To unmount the CD, see step g.

11. After the Oracle installation is complete, installsfrac checks if the installation was successful and takes the necessary actions.

The configuration summary is saved at:
/opt/VRTS/install/logs/installsfracxxxxxxxxxx.summary
The installsfrac log is saved at:
/opt/VRTS/install/logs/installsfracxxxxxxxxxx.log
The installation response file is saved at:
/opt/VRTS/install/logs/installsfracxxxxxxxxxx.response

Installing Oracle9i Release 2 Locally

Use this procedure to install Oracle9i Release 2 on each system locally in a VERITAS Storage Foundation 4.1 for Oracle RAC environment.

To prepare for installing Oracle9i Release 2 locally

1. Log in as root user on one system.

2. Add the directory path to the jar utility to the PATH environment variable. Typically, this is /usr/bin.

3. On one node, create a disk group.

a. Enter:

# vxdg init or_dg sdz

b. Create the volume in the group:

# vxassist -g or_dg make or_vol 5000M

For the Oracle9i Release 2 binaries, make the volume 5,000 MB.

c. Start the volume:

# vxvol -g or_dg startall

d. Create a VxFS file system on or_vol on which to install the Oracle9i binaries. For example:

# mkfs -t vxfs /dev/vx/dsk/or_dg/or_vol

4. Create the mount point for the file system.

a. Enter:

# mkdir /oracle

b. Mount the file system, using the device file for the block device:

# mount -t vxfs /dev/vx/dsk/or_dg/or_vol /oracle

c. To mount the file system automatically across reboots, edit the /etc/fstab file and add the new file system. For example:

/dev/vx/dsk/or_dg/or_vol /oracle vxfs defaults

5. Create a local group and a local user for Oracle. For example, create the group oinstall and the user oracle. Be sure to assign the same user ID and group ID for the user on each system.

6. Set the home directory for the oracle user as /oracle.

7. Assign ownership of the Oracle directory to oracle:

# vxedit -g or_dg set user=oracle group=oinstall mode=775 or_vol

8. Repeat step 1 through step 7 on the other systems.

9. Set the X server access control:

# xhost + <hostname or ip address>

where <hostname or ip address> is the hostname or IP address of the server to which you are displaying. Example:

# xhost + <ip address>
<ip address> being added to access control list

10. On the first system, set the following variables in root's environment on the node from which installsfrac -configure will be executed.

a. For Bourne shell (bash, sh, or ksh), enter:

# export ORACLE_BASE=/oracle
# export ORACLE_HOME=/oracle/orahome
# export DISPLAY=host:0.0

b. For the C Shell (csh or tcsh):

# setenv ORACLE_BASE /oracle
# setenv ORACLE_HOME /oracle/orahome
# setenv DISPLAY host:0.0

11. Set gcc32 as the default compiler. This prevents errors that may otherwise appear during linking of the extproc32 and isqlplus binaries. Enter:

# mv /usr/bin/gcc /usr/bin/gcc.orig
# mv /usr/bin/gcc32 /usr/bin/gcc

To install Oracle9i Release 2 locally

Note By default, the installsfrac utility uses ssh for remote communication. However, rsh can be used in place of ssh by using the -usersh option with the installsfrac utility.

1. On the same node where you have set the environment variables, execute the following command as root:

# cd /opt/VRTS/install
# ./installsfrac -configure

The installer displays the copyright message.

2. When the installer prompts, enter the system names separated by spaces on which to configure Storage Foundation for Oracle RAC. For the installation example used in this procedure:

galaxy nebula

3. Enter all the system names on which Storage Foundation for Oracle RAC was originally installed. The system list must match the /sbin/vcsmmlsnodes output. The installer checks both systems for communication and creates a log directory on the second system in /var/tmp/installsfracxxxxxxxxxx, where xxxx represents the timestamp.

4. When the initial system check is successfully completed, press Enter to continue.

5. The installer proceeds to verify the license keys.

a. Enter additional licenses now if any are needed.

b. When the licenses are successfully verified, press Enter to continue.

6. The installer presents task choices for installing and configuring, depending on the operating system you are running. Example:

This program enables you to perform one of the following tasks:
1) Install Oracle 9i.
2) Install Oracle 10g.
3) Relink SFRAC for Oracle 9i.
4) Relink SFRAC for Oracle 10g.
5) Configure different components of SFRAC

Enter your choice [1-5]: [?]

Select Install Oracle 9i. The installer proceeds to check environment settings.

7. The installer continues to check environment settings. Press Enter to continue when prompted. The installer proceeds to the configuration of an Oracle UNIX user account.

8. The installer proceeds with validating environment settings. The installer checks for the existence of the ORACLE_HOME and ORACLE_BASE directories, and creates them if required. The installer checks that the selected nodes have the required membership.

a. Enter the Oracle UNIX user account when prompted. Example:

Enter Oracle Unix User Account: [?] oracle

b. The installer checks that the account exists on all the systems. The installer then checks that remote communication is enabled between all nodes of the cluster in the Oracle user context. Enter the Oracle Inventory group when prompted. Example:

Enter Oracle Inventory group: [?] oinstall

c. The installer checks that the Oracle Inventory group exists on all the nodes of the cluster. Press Enter to continue when prompted.

9. The installer prompts for the Oracle installer path and checks if the required binaries exist. Then the installer calls the Oracle Universal Installer. Example:

Oracle Binary Installer Path

Enter the oracle installer path: [?] /mnt/cdrom/

10. Install the Oracle binaries.

a. The Oracle installer prompts you to run the script /tmp/orainstRoot.sh. Make sure the script exists on the node where installsfrac has been run before proceeding. If so, skip to step c on page 190.

b. If the script /tmp/orainstRoot.sh does not exist on each node, copy the script from the first system to each of the other cluster nodes.

c. Run the script /tmp/orainstRoot.sh on each node as root.

d. When the Node Selection dialog box appears, select all the nodes on which the Oracle binaries need to be installed.

e. Be sure to select the installation option: Software Only. Refer to the Oracle9i Installation Guide for additional information about running the utility.

Note If the Node Selection dialog box does not appear, refer to Missing Dialog Box During Installation of Oracle9i on page 79.

f. As the installer runs, it prompts you to designate the location for the Oracle SRVM component. You can enter the name of a file, such as ora_srvm, located on the cluster file system. See To create a Cluster File System for SRVM (Optional) on page 185. For example, you would enter:

/orasrv/ora_srvm

Or, to use the raw volume, enter /dev/vx/dsk/orasrv_dg/srvm_vol. See Creating Oracle User and Group on Each System on page 183.

g. When you are prompted by the installer, unmount the currently mounted CD. Log in as root and use the umount command. For example, enter:

# umount /mnt/cdrom

Remove the disc.

h. If you must mount another disc to continue the installation of Oracle9i:
   Insert the next disc in the CD-ROM drive. For mounting the CD-ROM, go to step 5 on page 187. Continue by pressing OK. To unmount the CD, see step g.

11. After the Oracle installation is complete, installsfrac checks whether the installation was successful and takes the necessary actions.
    The configuration summary is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.summary
    The installsfrac log is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.log
    The installation response file is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.response

Installing Oracle9i Manually

Typically, VERITAS recommends using installsfrac -configure to install the Oracle9i RAC binaries. However, some situations may require manual installation of the Oracle9i RAC binaries.

To install the Oracle9i RAC binaries manually

1. Log in as the oracle user:
   # su oracle

2. Following Oracle's requirements for installing Oracle9i on RHEL 3.0, set the LD_ASSUME_KERNEL environment variable (2.4.19 is the value Oracle documents for this platform):
   $ export LD_ASSUME_KERNEL=2.4.19

3. Set the ORACLE_BASE and ORACLE_HOME environment variables. For example:
   $ export ORACLE_BASE=/oracle
   $ export ORACLE_HOME=/oracle/orahome

4. If the ORACLE_HOME directory does not exist, create it. For example:
   $ mkdir -p /oracle/orahome
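Before continuing, it is worth confirming that the variables from steps 2 and 3 are in place; a quick check:
   $ env | grep -E 'ORACLE_BASE|ORACLE_HOME|LD_ASSUME_KERNEL'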

5. Prior to launching runInstaller, the $ORACLE_HOME/lib/libcmdll.so library must exist. This library instructs the Oracle installer to perform a RAC installation. To create the $ORACLE_HOME/lib/libcmdll.so library, perform the following:
   $ mkdir $ORACLE_HOME/lib
   $ cp /usr/lib/libvcsmm.so $ORACLE_HOME/lib/libcmdll.so

   Note: The $ORACLE_HOME/lib/libcmdll.so file must be owned by the oracle user prior to launching runInstaller. If this library is not owned by the oracle user, log in as root in another terminal and change the user ownership to the oracle user and the group ownership to the Oracle Inventory group.

   $ export LD_PRELOAD=$ORACLE_HOME/lib/libcmdll.so

6. Launch Oracle's runInstaller to begin the Oracle installation.

7. After installation, you must relink the VERITAS libraries.

To relink the VERITAS libraries

After applying an Oracle patchset, the Oracle libraries must be relinked. This post-upgrade reconfiguration allows Oracle to use the VERITAS RAC libraries rather than the native Oracle libraries. This is a required step in the Oracle RAC installation process; Oracle will not function properly in an SFRAC environment without the VERITAS libraries.

1. Log in as root:
   # su root

2. Export the ORACLE_BASE and ORACLE_HOME environment variables.

3. Change directories to /opt/VRTS/install.

4. Execute ./installsfrac -configure.

5. The installer checks the systems:
   a. When the installer prompts, enter the system names, separated by spaces, on which to configure Storage Foundation for Oracle RAC. For the installation example used in this procedure:
      galaxy nebula
      The installer checks both systems for communication and creates a log directory on the second system in /var/tmp/installsfracxxxxxxxxxx.

   b. When the initial system check is successfully completed, press Enter to continue.

6. The installer proceeds to verify the license keys.
   a. Enter additional licenses now if any are needed.
   b. When the licenses are successfully verified, press Enter to continue.

7. The installer presents task choices for installing and configuring, depending on the operating system you are running. Example:

   This program enables you to perform one of the following tasks:
   1) Install Oracle 9i.
   2) Install Oracle 10g.
   3) Relink SFRAC for Oracle 9i.
   4) Relink SFRAC for Oracle 10g.
   5) Configure different components of SFRAC
   Enter your choice [1-5]: [?]

   Select Relink SFRAC for Oracle 9i. The installer proceeds to check environment settings.

8. Set the Oracle directories.
   a. When prompted, enter the directory name for CRS_HOME. It is interpreted relative to the ORACLE_BASE directory (/oracle in this example).
   b. When prompted, enter the directory name for ORACLE_HOME. It is interpreted relative to the ORACLE_BASE directory.
      The installer proceeds to validate ORACLE_HOME and check the node list.
   c. Press Enter to continue.

9. Set user accounts.
   a. Enter the Oracle Unix user account when prompted. The installer checks for the user on all systems.
   b. Enter the Oracle Inventory group when prompted. The installer checks for group existence on all systems.
   c. Press Enter to continue.

10. The success of the configuration is reported. Press Enter to continue.
    The configuration summary is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.summary
    The installsfrac log is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.log

Applying Oracle9i Patchsets on Red Hat

Storage Foundation for Oracle RAC supports Oracle9i Patchset # ( ) for Linux.

To apply an Oracle9i patchset

1. Log in as the oracle user:
   # su oracle

2. Back up the VERITAS ODM library:
   $ mv $ORACLE_HOME/lib/libodm9.so $ORACLE_HOME/lib/libodm9.so.veritas

3. Restore the Oracle ODM library (a hedged sketch follows at the end of this page).

4. Apply the patchset.
   Note: Refer to the Oracle9i patchset README file for instructions on applying patchsets.

5. Perform post-upgrade relinking of the VERITAS libraries.

To relink the VERITAS libraries

After applying an Oracle patchset, the Oracle libraries must be relinked. This post-upgrade reconfiguration allows Oracle to use the VERITAS RAC libraries rather than the native Oracle libraries. This is a required step in the Oracle RAC installation process; Oracle will not function properly in an SFRAC environment without the VERITAS libraries.

1. Log in as root:
   # su - root

2. Export the ORACLE_BASE and ORACLE_HOME environment variables.
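For step 3 of the patchset procedure above, which names no command, a minimal sketch, assuming Oracle's stub ODM library libodmd9.so is present in $ORACLE_HOME/lib (verify the stub's name in your installation before copying):
   $ cp $ORACLE_HOME/lib/libodmd9.so $ORACLE_HOME/lib/libodm9.so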

3. Change directories:
   # cd /opt/VRTS/install

4. Start the installer:
   # ./installsfrac -configure

5. The installer checks the systems:
   a. When the installer prompts, enter the system names, separated by spaces, on which to configure Storage Foundation for Oracle RAC. For the installation example used in this procedure:
      galaxy nebula
      The installer checks both systems for communication and creates a log directory on the second system in /var/tmp/installsfracxxxxxxxxxx.
   b. When the initial system check is successfully completed, press Enter to continue.

6. The installer proceeds to verify the license keys.
   a. Enter additional licenses now if any are needed.
   b. When the licenses are successfully verified, press Enter to continue.

7. The installer presents task choices for installing and configuring, depending on the operating system you are running. Example:

   This program enables you to perform one of the following tasks:
   1) Install Oracle 9i.
   2) Install Oracle 10g.
   3) Relink SFRAC for Oracle 9i.
   4) Relink SFRAC for Oracle 10g.
   5) Configure different components of SFRAC
   Enter your choice [1-5]: [?]

   Select Relink SFRAC for Oracle 9i. The installer proceeds to check environment settings.

8. Configure the Oracle directories.
   a. When prompted, enter the directory name for CRS_HOME. It is interpreted relative to the ORACLE_BASE directory (/oracle in this example).
   b. When prompted, enter the directory name for ORACLE_HOME. It is interpreted relative to the ORACLE_BASE directory.
      The installer proceeds to validate ORACLE_HOME and check the node list.
   c. Press Enter to continue.

9. Set user accounts.
   a. Enter the Oracle Unix user account when prompted. The installer checks for the user on all systems.
   b. Enter the Oracle Inventory group when prompted. The installer checks for group existence on all systems.
   c. Press Enter to continue.

10. The success of the configuration is reported. Press Enter to continue.
    The configuration summary is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.summary
    The installsfrac log is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.log

To complete the patchset installation

1. When the Oracle9i installation is complete, remove the temporary rsh access permissions you set for the systems in the cluster. For example, if you added a + character in the first line of the .rhosts file, remove that line. For the complete procedure, see the VERITAS Cluster Server 4.1 Installation Guide for Linux.

2. Check whether Oracle is using the VERITAS ODM library. After starting the Oracle instances, you can confirm that Oracle is using the VERITAS ODM libraries by examining the Oracle alert file, alert_$ORACLE_SID.log. Look for the line that reads:
   Oracle instance running with ODM: VERITAS xxx ODM Library, Version x.x

3. Create an Oracle database on shared storage. Use your own tools or refer to "Creating a Starter Database" on page 351 to create a starter database.
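For step 2, a quick check of the alert file; the path below assumes the common background dump destination under ORACLE_BASE and will vary with your init parameters:
   $ grep -i 'running with ODM' $ORACLE_BASE/admin/$ORACLE_SID/bdump/alert_$ORACLE_SID.log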


12. Configuring VCS Service Groups for Oracle 9i on Red Hat

This chapter describes how you can set up VCS to automate the Oracle RAC environment on Red Hat Linux. It begins with a conceptual overview of how VCS manages resources within a cluster.

Storage Foundation for Oracle RAC 4.1 Service Groups: An Overview

The VERITAS Cluster Server (VCS) package, provided as part of the installation of Storage Foundation 4.1 for Oracle RAC, provides the ability to automate the entire RAC environment. For example, VCS can automatically start the Cluster Volume Manager and Cluster File System resources within the cluster, bring up IP addresses and the Oracle listener, mount the file systems with the Oracle binaries, mount the storage for the database instances, and start the database instances.

Placing the database under VCS control does not remove the DBA's capability for full control; it simply automates actions to enable the cluster to start up after any outage. In a Storage Foundation 4.1 for Oracle RAC cluster, the administrative staff is free to choose how much, or how little, automated control they want. Less automation means more traditional hands-on interaction, but also requires the administrator to take corrective action in more circumstances. A practical middle ground is to allow VCS complete startup control, so that it handles system and power failures and restarts, while still allowing manual control when necessary.

VCS uses installed agents to manage the resources in the Storage Foundation for Oracle RAC environment. Each type of resource has an agent; for example, VCS uses the CFSMount agent for mounting shared file systems, the CVMVolDg agent for activating shared disk groups and monitoring shared volumes, the Oracle agent for starting the Oracle database, the IP agent for setting up and monitoring an IP address, and so on. The VCS configuration file (/etc/VRTSvcs/conf/config/main.cf) contains the information the agents require to manage each resource in the environment. In addition, the configuration file specifies the dependencies between resources; the dependencies that one resource has upon another set the order in which the resources are started or stopped by the agents.

Checking cluster_database Flag in Oracle9i Parameter File

On each system, confirm that the cluster_database flag is set in the Oracle9i parameter file $ORACLE_HOME/dbs/init$ORACLE_SID.ora. This flag enables the Oracle instances to run in parallel. Verify that the file contains the line:
   cluster_database = true

Configuring CVM and Oracle Groups: Overview

During the installation of Storage Foundation 4.1 for Oracle RAC, a rudimentary CVM service group is automatically defined in the VCS configuration file, main.cf. This CVM service group includes the CVMCluster resource, which enables Oracle9i RAC communications between the nodes of the cluster, and the CFSfsckd daemons that control the transmission of file system data within the cluster. Refer to "Example VCS Configuration File After Installation" on page 63.

After installing Oracle9i Release 2 and creating a database, you can modify the CVM service group to include the Oracle listener process, the IP and NIC resources used by the listener process, and the CFSMount and CVMVolDg resources. The NIC and IP resources are used by the listener process to communicate with clients via the public network. The CFSMount and CVMVolDg resources are for the Oracle binaries installed on shared storage. The modification of the CVM service group is discussed in "Modifying the CVM Service Group in the main.cf File" on page 206.

Then you can add the Oracle database service group using the Storage Foundation for Oracle RAC wizard. This service group typically consists of an Oracle database created on shared storage and the CFSMount and CVMVolDg resources used by the database. There may be multiple Oracle database service groups, one for each database.

VCS Service Groups for RAC: Dependencies

The following illustration shows the dependencies among the resources in a typical RAC environment. The illustration shows two Oracle service groups (oradb1_grp and oradb2_grp) and one CVM service group (cvm). The dependencies among the resources within a group are shown, as well as the dependency that each Oracle group has on the CVM group. This chapter describes how to specify the resources.
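A quick way to confirm the flag on each node, assuming the default pfile naming convention shown above:
   $ grep cluster_database $ORACLE_HOME/dbs/init$ORACLE_SID.ora
   cluster_database = true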

Dependencies for VCS Service Groups for RAC

[Figure: the oradb1_grp and oradb2_grp service groups each contain an Oracle resource (VRT_db, rac_db) that depends on a CFSMount resource (oradb1_mnt, oradb2_mnt), which in turn depends on a CVMVolDg resource (oradb1_voldg, oradb2_voldg). Both Oracle groups depend on the cvm group. Within the cvm group, the Netlsnr resource (LISTENER) depends on the CFSMount resource (orabin_mnt) and the IP resource (listener_ip); listener_ip depends on the NIC resource (listener_eth0); orabin_mnt depends on the CFSfsckd resource (vxfsckd) and the CVMVolDg resource (orabin_voldg), which depends on the CVMCluster resource (cvm_clus). The group also contains the CVMVxconfigd resource.]

Configuring CVM and Oracle Service Groups Manually

CVM and Oracle groups can be configured manually or by using the VCS Oracle RAC Configuration Wizard. This section describes how to manually edit the main.cf file to configure the CVM and Oracle service groups.

To configure CVM and Oracle service groups manually

1. Log in to one system as root.

2. Save your existing configuration to prevent any changes while you modify main.cf:
   # haconf -dump -makero

3. Ensure VCS is not running while you edit main.cf by using the hastop command to stop the VCS engine on all systems, leaving the resources available:
   # hastop -all -force

4. Make a backup copy of the main.cf file:
   # cd /etc/VRTSvcs/conf/config
   # cp main.cf main.orig

5. Using vi or another text editor, edit the main.cf file, modifying the cvm service group and creating Oracle service groups using the guidelines in the following sections.

Modifying the CVM Service Group in the main.cf File

The cvm service group is created during the installation of Storage Foundation 4.1 for Oracle RAC. After installation, the main.cf file resembles the example shown in "Example VCS Configuration File After Installation" on page 63. Because Oracle9i has not been installed at that point, the cvm service group includes only resources for the CFSfsckd daemon, the CVMCluster resource, and the CVMVxconfigd daemon.

Adding Netlsnr, NIC, IP, CVMVolDg, and CFSMount Resources

You must modify the cvm service group to add the Netlsnr, NIC, IP, CVMVolDg, and CFSMount resources to the configuration. Refer to "Sample Configuration Files" on page 343 to see a complete example of how a cvm group is configured.

To configure Netlsnr, NIC, IP, CVMVolDg, and CFSMount resources

Key lines of the configuration file are shown in bold font.

1. Make sure the cvm group has the group Parallel attribute set to 1. Typically this is already done during installation.

   group cvm (
       SystemList = { galaxy = 0, nebula = 1 }
       AutoFailOver = 0
       Parallel = 1
       AutoStartList = { galaxy, nebula }
   )

2. Define the NIC and IP resources. The VCS bundled NIC and IP agents are described in the VERITAS Cluster Server Bundled Agents Reference Guide. The device name and the IP addresses are required by the listener for public network communication.

   Note: For the IP resource, the Address attribute is localized for each node (see "Local CVM and Oracle Group Attributes" on page 212); substitute the virtual IP address for each node and the appropriate netmask.

   NIC listener_eth0 (
       Device = eth0
   )

   IP listener_ip (
       Device = eth0
       Address @galaxy = "<virtual_IP_galaxy>"
       Address @nebula = "<virtual_IP_nebula>"
       NetMask = "<netmask>"
   )

3. Define the Netlsnr resource. The Netlsnr listener agent is described in detail in the VERITAS Cluster Server Enterprise Agent for Oracle Installation and Configuration Guide.

   Note: The Listener attribute is localized for each node.

   Netlsnr LISTENER (
       Owner = oracle
       Home = "/oracle/orahome"
       TnsAdmin = "/oracle/orahome/network/admin"
       MonScript = "./bin/Netlsnr/LsnrTest.pl"
       Listener @galaxy = LISTENER_a
       Listener @nebula = LISTENER_b
       EnvFile = "/opt/VRTSvcs/bin/Netlsnr/envfile"
   )

4. You must configure the CVMVolDg and CFSMount resources in the cvm group for the Oracle binaries installed on shared storage. Refer to the appendix "Using Agents" on page 361 for a description of the CVMVolDg and CFSMount agents.

   CVMVolDg orabin_voldg (
       CVMDiskGroup = orabindg
       CVMVolume = { "orabinvol", "srvmvol" }
       CVMActivation = sw
   )

   CFSMount orabin_mnt (
       Critical = 0
       MountPoint = "/oracle"
       BlockDevice = "/dev/vx/dsk/orabindg/orabinvol"
       Primary = galaxy
   )

5. Define the dependencies of the resources in the group. The dependencies are specified such that the Netlsnr resource requires the IP resource which, in turn, depends on the NIC resource. The Netlsnr resource also requires the CFSMount resource. The CFSMount resource requires the CFSfsckd daemon (vxfsckd) used by the cluster file system, and also depends on the CVMVolDg resource which, in turn, requires the CVMCluster resource for communications within the cluster.
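After the group is brought online, you can spot-check the binaries mount outside of VCS; a minimal sketch using the names assumed in the example above:
   # mount -t vxfs | grep /oracle
The device /dev/vx/dsk/orabindg/orabinvol should appear mounted at /oracle with the cluster option.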

   "VCS Service Groups for RAC: Dependencies" on page 204 visually shows the dependencies specified in the following statements:

   orabin_voldg requires cvm_clus
   orabin_mnt requires vxfsckd
   orabin_mnt requires orabin_voldg
   listener_ip requires listener_eth0
   LISTENER requires listener_ip
   LISTENER requires orabin_mnt

Adding Oracle Resource Groups in the main.cf File

For a complete description of the VCS Oracle enterprise agent, refer to the VERITAS Cluster Server Enterprise Agent for Oracle Installation and Configuration Guide. That document includes instructions for configuring the Oracle and Netlsnr agents.

Note: The VCS Enterprise Agent for Oracle version 4.1 is installed when you run installsfrac. When you refer to the VERITAS Cluster Server Enterprise Agent for Oracle Installation and Configuration Guide, ignore the steps described in the section "Installing the Agent Software".

To add Oracle resource groups in the main.cf file

1. Using "Sample Configuration Files" on page 343 as an example, add a service group to contain the resources for an Oracle database. For example, add the group oradb1_grp. Make sure you assign the Parallel attribute a value of 1.

   group oradb1_grp (
       SystemList = { galaxy = 0, nebula = 1 }
       AutoFailOver = 1
       Parallel = 1
       AutoStartList = { galaxy, nebula }
   )

2. Create the CVMVolDg and CFSMount resource definitions. See "Using Agents" on page 361 for a description of these agents and their attributes.

   CVMVolDg oradb1_voldg (
       CVMDiskGroup = oradb1dg
       CVMVolume = { "oradb1vol" }
       CVMActivation = sw
   )

   CFSMount oradb1_mnt (
       MountPoint = "/oradb1"
       BlockDevice = "/dev/vx/dsk/oradb1dg/oradb1vol"
       Primary = galaxy
   )

3. Define the Oracle database resource. Refer to the VERITAS Cluster Server Enterprise Agent for Oracle Installation and Configuration Guide for information on the VCS enterprise agent for Oracle.

   Note: The Oracle attributes Sid, Pfile, and Table must be set locally; that is, they must be defined for each cluster system.

   Oracle VRT_db (
       Sid @galaxy = VRT1
       Sid @nebula = VRT2
       Owner = oracle
       Home = "/oracle/orahome"
       Pfile @galaxy = "/oracle/orahome/dbs/initVRT1.ora"
       Pfile @nebula = "/oracle/orahome/dbs/initVRT2.ora"
       User = "scott"
       Pword = tiger
       Table @galaxy = vcstable_galaxy
       Table @nebula = vcstable_nebula
       MonScript = "./bin/Oracle/SqlTest.pl"
       AutoEndBkup = 1
       EnvFile = "/opt/VRTSvcs/bin/Oracle/envfile"
   )

4. Define the dependencies for the Oracle service group. Note that the Oracle database group is specified to require the cvm group, and that the required dependency is defined as online local firm, meaning that the cvm group must be online and remain online on a system before the Oracle group can come online on the same system. Refer to the VERITAS Cluster Server User's Guide for a description of group dependencies, and to "VCS Service Groups for RAC: Dependencies" on page 204.

   requires group cvm online local firm
   oradb1_mnt requires oradb1_voldg
   VRT_db requires oradb1_mnt

   See "Sample Configuration Files" on page 343 for a complete example. You can also find the complete file in /etc/VRTSvcs/conf/sample_rac/main.cf.

Additional RAC Processes Monitored by the VCS Oracle Agent

For shallow monitoring, the VCS Oracle agent monitors the ora_lmon and ora_lmd processes in addition to the ora_dbw, ora_smon, ora_pmon, and ora_lgwr processes.

Saving and Checking the Configuration

When you finish configuring the CVM and Oracle service groups by editing the main.cf file, verify the new configuration.

To save and check the configuration

1. Save and close the main.cf file.

2. Verify the syntax of the file /etc/VRTSvcs/conf/config/main.cf:
   # cd /etc/VRTSvcs/conf/config
   # hacf -verify .

3. Start the VCS engine on one system:
   # hastart

4. Type the command hastatus:
   # hastatus

5. When LOCAL_BUILD is listed in the message column, start VCS on the other system:
   # hastart

6. Verify that the service group resources are brought online. On one system, enter:
   # hagrp -display

Local CVM and Oracle Group Attributes

The following table lists attributes that must be defined as local for the CVM and Oracle service groups (note that each attribute has string-scalar as the type and dimension).

Resource: IP    Attribute: Address
   The virtual IP address (not the base IP address) associated with the interface. For example:
      Address @sysa = "<virtual_IP_sysa>"
      Address @sysb = "<virtual_IP_sysb>"

Resource: Netlsnr    Attribute: Listener
   The name of the listener. For example:
      Listener @sysa = LISTENER_a
      Listener @sysb = LISTENER_b

Resource: Oracle    Attribute: Sid
   The variable $ORACLE_SID represents the Oracle system ID. For example, if the SIDs for two systems, sysa and sysb, are VRT1 and VRT2 respectively, their definitions would be:
      Sid @sysa = VRT1
      Sid @sysb = VRT2

Resource: Oracle    Attribute: Pfile
   The parameter file: $ORACLE_HOME/dbs/init$ORACLE_SID.ora. For example:
      Pfile @sysa = "/oracle/VRT/dbs/initVRT1.ora"
      Pfile @sysb = "/oracle/VRT/dbs/initVRT2.ora"

Resource: Oracle    Attribute: Table
   The table used for in-depth monitoring (with User/Pword) on each cluster node. For example:
      Table @sysa = vcstable_sysa
      Table @sysb = vcstable_sysb
   Using the same table on all Oracle instances is not recommended: it generates IPC traffic and could cause conflicts between the Oracle recovery processes and the agent monitoring processes in accessing the table.
   Note: Table is only required if in-depth monitoring is used.

If the Pword varies by RAC instance, it must also be defined as local. If other attributes for the Oracle resource differ for various RAC instances, define them locally as well. These attributes may include the Oracle resource attributes User and Pword, the CVMVolDg resource attribute CVMActivation, and others.

Modifying the VCS Configuration

For additional information and instructions on modifying the VCS configuration by editing the main.cf file, refer to the VERITAS Cluster Server User's Guide.

Location of VCS and Oracle Agent Log Files

On all cluster nodes, look at the following log files for any errors or status messages:
   /var/VRTSvcs/log/engine_A.log
   /var/VRTSvcs/log/Oracle_A.log
   /var/VRTSvcs/log/Netlsnr_A.log

When large amounts of data are written, multiple log files may be required; for example, engine_B.log, engine_C.log, and so on, may be created. The engine_A.log contains the most recent data.

Configuring the Service Groups Using the Wizard

You can use the configuration wizard to configure the VCS service groups for the Storage Foundation for Oracle RAC environment. The wizard enables you to create the service group for Oracle and modify the CVM service group.
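As a quick way to watch the most recent engine messages (log name as listed above):
   # tail -20 /var/VRTSvcs/log/engine_A.log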

Using the Wizard to Configure CVM and Oracle Service Groups

The Oracle9i RAC configuration wizard guides you through the creation of an Oracle service group and the definition of the Oracle, CFSMount, and CVMVolDg resources. It adds the Netlsnr resources to the existing CVM group. If the listeners use the virtual IP, the wizard also adds the IP and NIC resources to the CVM group. The wizard configures the Oracle service group to depend on the CVM group with an online-local-firm dependency.

Before Starting the Wizard

Before starting the wizard, verify that your Oracle installation can be configured by reviewing the requirements listed below. You also need to provide the wizard with information as it proceeds; make sure you have that information at hand.

Prerequisites

- Oracle RAC instances and listeners must be running on all cluster nodes.
- The database files of all instances must be on shared storage, that is, in a cluster file system.
- Each Oracle instance must be associated with a listener. The listener may be configured to listen to either the base IP or a virtual IP.
- The IP addresses and host names specified in the files listener.ora and tnsnames.ora must be the same.
- If detail monitoring is to be used for a database instance, the table used for detail monitoring must be set up, with user and password assigned.

Information You Will Need

- The names of the database instances to be configured
- The information required for the detail monitoring configuration
- The location of the pfile for each instance

Establishing Graphical Access for the Wizard

The configuration wizard requires graphical access to the VCS systems where you want to configure service groups. If your VCS systems do not have monitors, or if you want to run the wizards from a remote UNIX system, do the following:

To establish graphical access from a remote system

1. From the remote system (jupiter, for example), run:
   # xhost +

2. Do one of the following, depending on your shell, on one of the systems where the wizard is to run (galaxy, for example):
   For the Bourne shell (bash, sh, or ksh):
   # export DISPLAY=jupiter:0.0
   For the C shell (csh):
   # setenv DISPLAY jupiter:0.0

3. Verify that the DISPLAY environment variable has been updated:
   # echo $DISPLAY
   jupiter:0.0

To set up the pre-Oracle configuration for the wizard

1. Become the oracle user:
   # su - oracle

2. Start the listeners on all cluster nodes:
   $ $ORACLE_HOME/bin/lsnrctl start LISTENER_X

3. Start the Oracle RAC instances on each node. For example:
   $ export ORACLE_SID=rac1
   $ sqlplus "/ as sysdba"
   sqlplus> startup

4. Set the TNS_ADMIN parameter if the listener.ora and tnsnames.ora files are not under the default directory.
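For step 4, a minimal sketch; the path shown is only an assumption, taken from the TnsAdmin value used in the earlier Netlsnr example:
   $ export TNS_ADMIN=/oracle/orahome/network/admin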

Starting the Configuration Wizard

The configuration wizard for Oracle9i RAC is started at the command line.

1. Log in to one of your VCS systems as root.

2. Start the configuration wizard:
   # /opt/VRTSvcs/bin/hawizard rac

The Welcome Window

The RAC wizard starts with a Welcome window that highlights the prerequisites for configuration and the information you will need to complete the configuration. If your configuration does not meet the configuration requirements, you can stop the wizard by pressing Cancel. Take the necessary steps to meet the requirements and start the wizard again (see "Starting the Configuration Wizard" above).

The Wizard Discovers the RAC Configuration

If you are ready to configure the Oracle service group, press Next on the Welcome screen. The wizard begins discovering the current Oracle RAC information before proceeding with the next screen. If the wizard does not find all databases and listeners running on all systems in the cluster, it halts with an error indicating the problem. Press Cancel, and start the wizard again after you correct the problem.

The Wizard Options Screen

The first configuration screen presents options to Create RAC service group or Modify RAC service group. If you are using the wizard to modify a service group, see "Modifying the Oracle9i RAC Service Group Configuration" on page 229.

Creating a RAC Service Group

To create an Oracle RAC service group, click the corresponding button and provide a name for the Oracle service group. Guidelines for naming an Oracle service group are available by clicking Help (?). After entering a service group name, click Next.

Database Selection Screen

The databases and their instances running on the cluster are listed on this screen. If more than one database is listed, highlight only one of them, and click Next.

Instance Configuration Screen

Configure the basic database instance information on the Instance Configuration screen.

Confirming Basic Database Instance Information

For each database instance discovered, basic configuration information is displayed. If necessary, double-click in a field to select and edit its contents.

- Instance name: Each instance is listed in the left-hand column.
- Oracle Parameter File (Pfile): The file that is used to start Oracle. The default location for a given instance is listed. Edit the information if necessary.
- Startup Options: Accept the displayed STARTUP option, or select an option from the drop-down menu. The startup options include starting in RESTRICTED, RECOVERDB, SRVCTLSTART, or CUSTOM modes.
- Stop Options: Accept the displayed IMMEDIATE option, or select an option from the drop-down menu. The stop options also include TRANSACTIONAL, SRVCTLSTOP, and CUSTOM.

Checkbox for Choosing Detailed Monitoring

On the Instance Configuration screen, you can choose to enable detail monitoring. If you check Enable Detail Monitoring, be sure you have previously set up the database table, user, and password for the agent to use during monitoring. See "Detail Monitoring Screen - Oracle Resource". If you are not set up for detail monitoring, do not select it.

Checkbox for Choosing to Specify Advanced Options

On the Instance Configuration screen, you can choose to Specify Advanced Options. The advanced options include setting up an EnvFile (to define environment variables), Encoding, and an AutoEndBkup parameter. See "Oracle Advanced Configuration Screen".

Detail Monitoring Screen - Oracle Resource

This screen is displayed if you have checked Detail Monitoring at the bottom of the Instance Configuration screen. For each database instance identified by its Sid, this screen displays fields for defining the attributes that enable detail monitoring of the Oracle database resource. You do not have to enable detail monitoring on all instances, but for each instance you check, all of the fields are required:

- User: Oracle user, which the Oracle agent uses to log on to monitor the health of the database.

- Password: Password for the Oracle user.
- Table: Name of the database table to be used by the Oracle agent monitor.

Oracle Advanced Configuration Screen

This screen is displayed if you have checked Specify Advanced Options at the bottom of the Instance Configuration screen. For each database instance identified by its Sid, this screen displays fields for configuring the advanced attributes of the Oracle service group. You may select which database instance you want to configure advanced attributes for, and which attributes you want to define. Advanced attributes include:

- EnvFile: The source file used by the agent entry point scripts.
- Encoding: The operating system encoding that corresponds to Oracle encoding for the displayed Oracle output; the encoding value must match the encoding value used by the Netlsnr configuration.
- AutoEndBkup: Specifies that datafiles in the database are taken out of backup mode when the instance is brought online.

See the VERITAS Cluster Server Enterprise Agent for Oracle Installation and Configuration Guide for a complete description of the EnvFile, Encoding, and AutoEndBkup attributes.

Database Configuration Screen

If you have installed the database on a cluster file system, the wizard discovers the mount point and displays it on the Database Configuration screen. You can confirm the mount options displayed, or you can modify them. If the database exists on raw volumes, the wizard discovers the volumes.

Listener Configuration Screen

The Listener Configuration screen displays the name of the listener corresponding to each database instance, as well as the IP address and device name used by each listener. Typically, you cannot change this information, only verify it.

Checkboxes: Detail Monitoring, Advanced Listener Options

You can choose to configure detail monitoring for the Netlsnr agent by clicking the Enable detail monitoring checkbox. The wizard uses the monitor script /opt/VRTSvcs/bin/Netlsnr/LsnrTest.pl to monitor the listeners in detail. You can also choose to Specify Advanced options, which include setting up an EnvFile (to define environment variables), Encoding, and LsnrPwd parameters. See "Listener Advanced Configuration Screen" on page 225.

Using the Host IP Address

If you have set up the listener to use the base, or host, IP address, the wizard displays a message when you press Next on the Listener Configuration screen.

Listener Advanced Configuration Screen

This screen displays if you have checked Specify Advanced options at the bottom of the Listener Configuration screen. For each listener identified by name, this screen displays fields for defining the advanced attributes of the Netlsnr resource.

- Netlsnr EnvFile: The name of the source file used by the agent entry point scripts; this file must exist.
- Netlsnr Encoding: The operating system encoding that corresponds to Oracle encoding for the displayed Oracle output; the encoding value must match the encoding value used by the Oracle configuration (see "Oracle Advanced Configuration Screen" on page 222).
- Listener Password: The password used for Netlsnr; you must specify the password as it appears in the listener.ora file.

Service Group Summary Screens

After you have configured the database and listener resources, the wizard displays the configuration on a Summary screen.

You can click on a resource within the service group to highlight it and display its attributes and their values. For example, if you click on the name of the Oracle resource, Ora-racj1, the wizard displays details of the Oracle resource, as the following illustration shows.

The next illustration shows the attributes for the CFSMount resource. Note the dependencies listed at the bottom of the Attributes screen. The Netlsnr resource is configured as part of the CVM service group. The CVM service group also contains other resources, which may not be displayed by the wizard because the wizard does not control them.

Implementing the Configuration

The wizard implements the configuration changes when you click Finish. The wizard creates the Oracle service group and adds the Netlsnr resource to the CVM configuration.

Modifying Oracle RAC Service Groups Using the Wizard

Once an Oracle RAC service group is created on a system, the configuration wizard can be used to modify the service group's Oracle, Netlsnr, and CVM components.

Note: If modification of underlying mount point or volume information is necessary, the mount points or volumes must be deleted and added in the Oracle database before the wizard is started. The wizard then discovers the new information.

Prerequisites

- To modify some resources, you must make the changes in the Oracle database before you start the RAC configuration wizard. When you start the wizard, it discovers the new information. This applies to:
  - Adding or removing the database mount points
  - Adding or removing shared volumes
- To modify network resources, make sure that the service group is offline.
- To add or remove database files from your configuration, make sure that the service group is online.

Modifying the Oracle9i RAC Service Group Configuration

1. Start the RAC configuration wizard as root on the VCS system:
   # /opt/VRTSvcs/bin/hawizard rac

2. On the Welcome window, click Next.

3. In the Wizard Options window, select the Modify RAC service group option, select the service group to be modified, and click Next.

4. Follow the wizard instructions and make modifications according to your configuration. See page 214 for more information about the configuration wizard.


13. Adding and Removing Cluster Nodes Using Oracle9i on Red Hat

A cluster running VERITAS Storage Foundation for Oracle RAC can have as many as eight systems. If you have a multi-system cluster running Oracle9i on Red Hat Linux and want to add or remove a system, use the procedures described in this chapter.

Adding a Node to an Oracle9i Red Hat Cluster

The examples in these procedures describe adding one node to a two-system cluster.

- Checking System Requirements for New Node on page 232
- Physically Adding a New System to the Cluster on page 232
- Installing Storage Foundation for Oracle RAC on the New System on page 232
- Configuring LLT, GAB, VCSMM, and VXFEN Drivers on page 235
- Setting up Oracle on page 238
- Setting up Oracle for Shared Storage on page 240
- Configuring the New Oracle Instance on page 241

Checking System Requirements for New Node

Make sure that the new systems you add to the cluster meet all of the requirements for installing and using Storage Foundation for Oracle RAC.

- The new system must have the identical operating system and patch level as the existing systems. Refer to "Reviewing the Requirements" on page 22.
- Make sure you use a text window of at least 80 columns by 24 lines; 80 columns by 24 lines is the recommended size for the optimum display of the installsfrac script.
- Make sure that the file /etc/fstab contains only valid entries, each of which specifies a file system that can be mounted.

Physically Adding a New System to the Cluster

When you physically add the new system to the cluster, it must have private network connections to the two independent switches used by the cluster and be connected to the same shared storage devices as the existing nodes. Refer to the VERITAS Cluster Server Installation Guide.

After you install Storage Foundation for Oracle RAC on the new system and start VxVM, the new system can access the same shared storage devices. It is important that the shared storage devices, including coordinator disks, are exactly the same on all nodes. If the new node does not see the same disks as the existing nodes, it will be unable to join the cluster, as indicated by an error from CVM on the console.

Installing Storage Foundation for Oracle RAC on the New System

Check system readiness before installing Storage Foundation for Oracle RAC.

To check the new system for installation readiness

1. Log in to the new system as root. In this example, the new system is saturn.

2. Insert the VERITAS software disc containing the product into the new system's CD-ROM drive. The Linux volume-management utility automatically mounts the CD.

3. Change to the directory containing the installsfrac program:
   # cd /mnt/cdrom/rhel3_i686/storage_foundation_for_oracle_rac

4. Run the installsfrac script with the -precheck option to verify that the current operating system level, patch level, licenses, and disk space are adequate for a successful installation:
   # ./installsfrac -precheck saturn
   The utility's precheck function proceeds non-interactively. When complete, the utility displays the results of the check and saves them in a log file.

5. If the check is successful, proceed to run installsfrac with the -installonly option. If the precheck function indicates that licensing is required, you may proceed and add the license when running the installation utility. Otherwise, the precheck function indicates required actions.

To install Storage Foundation for Oracle RAC without configuration

1. On the new system, use the -installonly option, which enables you to install Storage Foundation for Oracle RAC on the new system without performing configuration. The configuration from the existing cluster systems is used.
   # ./installsfrac -installonly
   After presenting a copyright message, the script prompts for the name of the system and performs initial checks.

2. The script displays messages and prompts you to start installation if you are ready to continue. Answer Y.

3. As the installation proceeds, the script checks for the presence of the infrastructure component packages, VRTScpi and VRTSvlic, and installs them if they are not present.

4. The installer prompts you for a license if one is not present:
   Checking SFRAC license key on saturn... not licensed
   Enter a SFRAC license key for saturn: [?] XXXX-XXXX-XXXX-XXXX-XX
   Registering VERITAS Storage Foundation for Oracle RAC PERMANENT key on saturn
   Do you want to enter another license key for saturn? [y,n,q,?] (n)
   SFRAC licensing completed successfully.

5. The installer lists the packages and patches it is to install and checks whether any of them are present on the system.

6. After installsfrac installs the packages and patches, it summarizes the installation with the following messages:
   Installation of Storage Foundation for Oracle RAC 4.1 has completed successfully.
   The installation summary is saved at:
   /opt/VRTS/install/logs/installsfracdate_time.summary
   The installsfrac log is saved at:
   /opt/VRTS/install/logs/installsfracdate_time.log
   The installation response file is saved at:
   /opt/VRTS/install/logs/installsfracdate_time.response
   Storage Foundation for Oracle RAC cannot be started without configuration.
   Run installsfrac -configure when you are ready to configure SFRAC.

7. Insert the product disc, change directory to the installer, and run it:
   # cd /mnt/cdrom/rhel3_i686
   # ./installmp -installonly

   Note: Ignore the message advising that you must run installsfrac -configure. When adding a node to a cluster running Storage Foundation for Oracle RAC, you must manually configure the system using the following procedure.

To run the installer

1. To start VERITAS Volume Manager on the new node, use the vxinstall utility. As you run the utility, answer N to prompts about licensing, because you installed the appropriate license when you ran the installsfrac utility.
   # vxinstall
   VxVM uses license keys to control access. If you have a SPARCstorage Array (SSA) controller or a Sun Enterprise Network Array (SENA) controller attached to your system, then VxVM will grant you a limited use license automatically. The SSA and/or SENA license grants you unrestricted use of disks attached to an SSA or SENA controller, but disallows striping, RAID-5, and DMP on non-SSA and non-SENA disks. If you are not running an SSA or SENA controller, then you must obtain a license key to operate.
   Licensing information:
   Some licenses are already installed.
   Do you wish to review them [y,n,q] (default: y) n
   Do you wish to enter another license key [y,n,q] (default: n)

2. Answer N when prompted to select enclosure-based naming for all disks:
   Do you want to use enclosure based names for all disks? [y,n,q,?] (default: n)
   Sep 24 14:22:06 thor181 vxdmp: NOTICE: VxVM vxdmp V added disk array DISKS, datype = Disk
   Starting the cache daemon, vxcached.

3. Answer N when asked to set up a system-wide default disk group:
   Do you want to setup a system wide default disk group? [y,n,q,?] (default: y) n
   The installation is successfully completed.

4. Verify that the daemons are up and running. When you enter the command:
   # vxdisk list
   the output should display the shared disks without errors.
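A healthy listing might resemble the following sketch; the device names, disk media names, and disk group are placeholders, and your TYPE values may differ:
   DEVICE       TYPE            DISK         GROUP      STATUS
   sda          auto:cdsdisk    -            -          online
   sdb          auto:cdsdisk    orabindg01   orabindg   online shared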

Configuring LLT, GAB, VCSMM, and VXFEN Drivers

To configure the LLT, GAB, VCSMM, and VxFEN drivers

1. On the new system, modify the file /etc/sysctl.conf to set the shared memory parameter; refer to the Oracle9i Installation Guide for details. The value of the shared memory parameter takes effect when the system restarts.

2. Edit the file /etc/llthosts on the two existing systems. Using vi or another text editor, add the line for the new node to the file. The file should resemble:
   0 galaxy
   1 nebula
   2 saturn

3. Copy the /etc/llthosts file from one of the existing systems to the new system. The /etc/llthosts file must be identical on all nodes in the cluster.

4. Create an /etc/llttab file on the new system. For example:
   set-node saturn
   set-cluster 7
   link eth0 eth0 - ether - -
   link eth1 eth1 - ether - -
   Except for the first line, which names the system, the file resembles the /etc/llttab files on the other two nodes. The second line, the cluster ID, must be the same on all of the nodes.

5. Use vi or another text editor to create the file /etc/gabtab on the new system. It should resemble the following example:
   /sbin/gabconfig -c -nN
   where N represents the number of systems in the cluster. For a three-system cluster, N would equal 3.

6. Edit the /etc/gabtab file on each of the existing systems, changing the content to match the file on the new system.

7. Set up the /etc/vcsmmtab and /etc/vxfendg files on the new system by copying them from one of the other existing nodes:
   # scp galaxy:/etc/vcsmmtab /etc
   # scp galaxy:/etc/vxfendg /etc

8. Run the following commands to start LLT and GAB on the new node:
   # /sbin/service llt start
   # /sbin/service gab start

9. On the new node, start the VXFEN, VCSMM, and LMX drivers. Use the following commands in the order shown:
   # /sbin/service vxfen start
   # /sbin/service vcsmm start
   # /sbin/service lmx start

10. On the new node, start the GMS and ODM drivers. Use the following commands in the order shown:
    # /sbin/service vxgms start
    # /sbin/service vxodm start

11. On the new node, verify that the GAB port memberships are a, b, d, and o. Run the command:
    # /sbin/gabconfig -a
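The output might resemble the following sketch; generation numbers will differ, and the membership column should list all node IDs (here 0, 1, and 2):
   GAB Port Memberships
   ===============================================================
   Port a gen ada401 membership 012
   Port b gen ada40d membership 012
   Port d gen ada409 membership 012
   Port o gen ada406 membership 012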

Configuring CVM

You can modify the VCS configuration using one of three possible methods: you can edit /etc/VRTSvcs/conf/config/main.cf (the VCS configuration file) directly, you can use the VCS GUI (Cluster Manager), or you can use the command line, as illustrated in the following example. Refer to the VERITAS Cluster Server User's Guide for details about how to configure VCS.

To configure CVM

1. On one of the existing nodes, check the groups dependent on the CVM service group:
   # hagrp -dep cvm
   Parent       Child   Relationship
   oradb1_grp   cvm     online local firm
   oradb2_grp   cvm     online local firm

2. Make the VCS configuration writable:
   # haconf -makerw

3. Add the new system to the cluster:
   # hasys -add saturn

4. If the ClusterService service group is configured, add the new system to its system list and specify a failover priority of 2:
   # hagrp -modify ClusterService SystemList -add saturn 2

5. If the ClusterService service group is configured, add the new system to the service group's AutoStartList:
   # hagrp -modify ClusterService AutoStartList galaxy nebula saturn

6. Add the new system to the cvm service group system list and specify a failover priority of 2:
   # hagrp -modify cvm SystemList -add saturn 2

7. Add the new system to the cvm service group AutoStartList:
   # hagrp -modify cvm AutoStartList galaxy nebula saturn

8. Add the new system and its node ID (refer to the /etc/llthosts changes in step 2 on page 235) to the cvm_clus resource:
   # hares -modify cvm_clus CVMNodeId -add saturn 2

9. If the IP resource is not part of the cvm service group, skip to the next step. If the IP resource is part of the cvm service group, add the new system to the IP resource, supplying the virtual IP address for the new node:
   # hares -modify listener_ip Address <virtual_IP_address> -sys saturn

10. If the listener name is the default, skip this step. Otherwise, add the local listener name to the Netlsnr resource:
    # hares -modify LISTENER Listener listener_saturn -sys saturn

11. Save the new configuration to disk:
    # haconf -dump -makero

12. On each of the existing nodes, run the following command to enable them to recognize the new node:
    # /etc/vx/bin/vxclustadm -m vcs -t gab reinit

Setting up Oracle

To set up the new system for Oracle

1. On the new cluster system, create a local group and local user for Oracle.
   a. Assign the same group ID, user ID, and home directory as exist on the two current cluster systems. For example, enter:
      # groupadd -g 1000 dba
      # useradd -g dba -u 1000 -d /oracle oracle
   b. Create a password for the user oracle:
      # passwd oracle

2. If the Oracle system binaries are installed on shared storage on the existing systems, skip to "Setting up Oracle for Shared Storage" on page 240. If Oracle is installed locally on the two existing systems, install Oracle on the new system's local disk in the same location as it is installed on the existing systems.
   a. Refer to Oracle metalink document ID # and follow the instructions provided for adding a node to an Oracle9i cluster.
   b. Relink the Oracle binary to the VERITAS libraries.

To relink the VERITAS libraries

1. Log in as root.

2. Export the ORACLE_BASE and ORACLE_HOME environment variables.

3. Change directories to /opt/VRTS/install.

4. Execute ./installsfrac -configure.

5. The installer checks the systems:
   a. When the installer prompts, enter the system names, separated by spaces, on which to configure Storage Foundation for Oracle RAC. For the installation example used in this procedure:
      galaxy nebula
      The installer checks both systems for communication and creates a log directory on the second system in /var/tmp/installsfracxxxxxxxxxx.
   b. When the initial system check is successfully completed, press Enter to continue.

6. The installer proceeds to verify the license keys.
   a. Enter additional licenses now if any are needed.
   b. When the licenses are successfully verified, press Enter to continue.

7. The installer presents task choices for installing and configuring, depending on the operating system you are running. Example:

   This program enables you to perform one of the following tasks:
   1) Install Oracle 9i.
   2) Install Oracle 10g.
   3) Relink SFRAC for Oracle 9i.
   4) Relink SFRAC for Oracle 10g.
   5) Configure different components of SFRAC
   Enter your choice [1-5]: [?]

   Select Relink SFRAC for Oracle 9i. The installer proceeds to check environment settings.

8. Configure the Oracle directories.
   a. When prompted, enter the directory name for CRS_HOME. It is interpreted relative to the ORACLE_BASE directory (/oracle in this example).
   b. When prompted, enter the directory name for ORACLE_HOME. It is interpreted relative to the ORACLE_BASE directory.
      The installer proceeds to validate ORACLE_HOME and check the node list.
   c. Press Enter to continue.

9. Set user accounts.
   a. Enter the Oracle Unix user account when prompted. The installer checks for the user on all systems.
   b. Enter the Oracle Inventory group when prompted. The installer checks for group existence on all systems.
   c. Press Enter to continue.

10. The success of the configuration is reported. Press Enter to continue.
    The configuration summary is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.summary
    The installsfrac log is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.log

Setting up Oracle for Shared Storage

To set up the new system for shared storage

1. If the Oracle system binaries are installed on a cluster file system, set up the new system for Oracle.
   a. Create a mount point for the shared file system on the new system; this mount point should have the same name as is used by the shared file system on the existing cluster nodes:
      # mkdir /oracle
   b. Mount the shared file system on the new node:
      # mount -t vxfs -o cluster /dev/vx/dsk/or_dg/or_vol /oracle

2. On one of the existing nodes, run the following command to ensure that the new node is recognized by CVM:
   # /etc/vx/bin/vxclustadm nidmap
   Name     CVM Nid   CM Nid   State
   galaxy   0         0        Joined: Slave
   nebula   1         1        Joined: Master
   saturn   2         2        Joined: Slave

Configuring the New Oracle Instance

To configure the new Oracle instance

1. On an existing system, add a new instance. Refer to the Oracle9i Installation Guide. Highlights of the steps to add a new instance include (a hedged SQL sketch follows at the end of this page):
   - Log in as the user oracle and connect to the instance.
   - Create a new undotbs tablespace for the new instance; for example, if the tablespace is for the third instance, name it undotbs3. If the database uses raw volumes, create the volume first, using the same size used by the existing undotbs volumes.
   - Create two new redo log groups for the new instance; for example, if they are for the third instance, create the logs redo3_1 and redo3_2. If the database uses raw volumes, create the volumes for the redo logs first, using the size used by the existing redo volumes.
   - Enable thread 3, where 3 is the number of the new instance.
   - Prepare the init{sid}.ora file for the new instance on the new node. If Oracle is installed locally on the new system, prepare the directories bdump, cdump, udump, and pfile.

2. If you use in-depth monitoring for the database, create the table for the database instance on the new system. Refer to the VERITAS Cluster Server Enterprise Agent for Oracle Installation and Configuration Guide for instructions on creating the table.
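For the step 1 bullets above, a hedged SQL sketch run from an existing instance; all file names, sizes, and group numbers are assumptions to adapt to your layout (the example uses a cluster file system rather than raw volumes):
   $ sqlplus "/ as sysdba"
   sqlplus> create undo tablespace undotbs3
            datafile '/oradb1/undotbs3.dbf' size 500M;
   sqlplus> alter database add logfile thread 3
            group 5 ('/oradb1/redo3_1.log') size 120M,
            group 6 ('/oradb1/redo3_2.log') size 120M;
   sqlplus> alter database enable public thread 3;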

3. Configure the ODM port on the new node. ODM is automatically mounted on the new system during installation, but not linked.
   a. Unmount the ODM directory to unconfigure port d:
      # umount /dev/odm
   b. Mount the ODM directory. Re-mounting the ODM directory configures port d and re-links the ODM libraries with Storage Foundation for Oracle RAC:
      # mount /dev/odm

4. Create a mount point directory for the database file system. For example:
   # mkdir /oradb1

5. Set the permissions for the mount point:
   # chown oracle:dba /oradb1

6. Mount the file system with the database on the new system:
   # mount -t vxfs -o cluster /dev/vx/dsk/oradbdg1/oradb1vol /oradb1

7. Log in as user oracle and attempt to manually start the new instance; the following example is for a third system:
   $ export ORACLE_SID=rac3
   $ sqlplus "/ as sysdba"
   SQL> startup pfile=/oracle/orahome/dbs/initrac3.ora

8. After the new Oracle instance is brought up manually on the new system, place the instance under VCS control.
   a. Add the new system to the SystemList. For example, where the existing nodes galaxy and nebula are nodes 0 and 1, saturn, the new node, would be node 2:
      # haconf -makerw
      # hagrp -modify oradb1_grp SystemList -add saturn 2
   b. Add the new system to the AutoStartList for oradb1_grp:
      # hagrp -modify oradb1_grp AutoStartList galaxy nebula saturn
   c. Modify the Sid (system ID) and Pfile (parameter file location) attributes of the Oracle resource. For example:
      # hares -modify VRTdb Sid rac3 -sys saturn
      # hares -modify VRTdb Pfile /oracle/orahome/dbs/initrac3.ora -sys saturn

   d. If you have created a table for in-depth monitoring, modify the Table attribute of the Oracle resource. For example:
      # hares -modify VRTdb Table vcstable_saturn -sys saturn
   e. Close and save the configuration:
      # haconf -dump -makero

9. To verify the configuration, enter the following command on the new system:
   # hastop -local
   All resources should come offline on the new system.

10. Verify that all resources come online after starting VCS on the new system:
    # hastart

Removing a Node for Oracle9i on Red Hat

The examples used in these procedures describe removing one node from a three-system cluster.

- Stopping gsd on page 244
- Stopping Applications Using CFS that are Not VCS Controlled on page 244
- Stopping VCS on page 244
- Uninstalling Locally Installed Oracle (optional) on page 244
- Unlinking the Oracle Binary from the VERITAS Libraries on page 245
- Running the uninstallsfrac Utility on page 246
- Editing VCS Configuration Files on Existing Nodes on page 248
- Removing the Infrastructure Packages on page 250
- Removing Temporary Files on page 250
- Removing VxVM Backup Files on page 251
- Removing Licenses on page 252

Stopping gsd

To stop the gsd processes

On each node to be removed from the Storage Foundation for Oracle RAC cluster, enter as user oracle:
   $ $ORACLE_HOME/bin/gsdctl stop

Stopping Applications Using CFS that are Not VCS Controlled

To stop all applications using the CFS mounts not under VCS control

1. Ensure that no processes are using the CFS mount point:
   # fuser -m mount_point

2. Stop any processes using the CFS mount point.

Stopping VCS

To stop VCS

1. Log in as root user on the cluster system you are removing.

2. Stop VCS on the node:
   # hastop -local
   Stopping VCS takes all service groups on the system offline.

Uninstalling Locally Installed Oracle (optional)

If Oracle is installed on a cluster file system, skip to Running the uninstallsfrac Utility on page 246. If Oracle is installed locally and you choose not to uninstall it, go to the next section, Unlinking the Oracle Binary from the VERITAS Libraries on page 245. If Oracle is installed locally and you choose to uninstall it, use the Oracle runinstaller utility. Run the utility on each system you are removing from the RAC cluster.

To uninstall locally installed Oracle

1. On one system, log in as oracle.

2. Set the DISPLAY variable.
   If you use the Bourne shell (bash, sh, or ksh):
   $ export DISPLAY=host:0.0
   If you use the C shell (csh or tcsh):
   $ setenv DISPLAY host:0.0

3. Run the Oracle utility runinstaller.
   $ cd $ORACLE_BASE/oui/install
   $ ./runinstaller
   As the utility starts up, be sure to select the option to uninstall the Oracle software from the local node you are removing. Refer to the Oracle Installation Guide for additional information about running the utility.

Unlinking the Oracle Binary from the VERITAS Libraries

If Oracle is on a cluster file system, or if you have uninstalled Oracle, skip to Running the uninstallsfrac Utility on page 246. Otherwise, remove the links to the VERITAS libraries and convert to an Oracle single-instance binary. Use the following procedure on each system you are removing from the RAC cluster.

To unlink the VERITAS libraries

1. Log in as the oracle user.

2. Using vi or another text editor, edit the init$ORACLE_SID.ora file, setting the cluster_database parameter to FALSE:
   $ cd $ORACLE_HOME/dbs
   $ vi init$ORACLE_SID.ora
   cluster_database=FALSE

3. Change to the $ORACLE_HOME/lib directory:
   $ cd $ORACLE_HOME/lib

4. Restore the original Oracle libraries:
   $ rm libodm9.so
   $ cp libodm9.so_XX_XX_XX-XX_XX_XX libodm9.so
   $ rm libskgxp9.so
   $ cp libskgxp9.so_XX_XX_XX-XX_XX_XX libskgxp9.so
   $ rm libskgxn9.so
   $ cp libskgxn9.so_XX_XX_XX-XX_XX_XX libskgxn9.so

   Note Replace the XX_XX_XX-XX_XX_XX in the above example with the most recent timestamp.

Running the uninstallsfrac Utility

You can run the uninstall script from any node in the cluster, including a node from which you are uninstalling Storage Foundation for Oracle RAC.

   Note Prior to invoking the uninstallsfrac script, all service groups must be brought offline and VCS must be shut down.

For this example, Storage Foundation for Oracle RAC is removed from the node named saturn.

To run the uninstallsfrac utility

1. As root user, start uninstalling from any node from which you are removing Storage Foundation for Oracle RAC. Enter:
   # cd /opt/VRTS/install
   # ./uninstallsfrac

2. The welcome screen appears, followed by a notice that the utility discovers configuration files on the system. The information lists all the systems in the cluster and prompts you to indicate whether you want to uninstall from all systems. You must answer N. For example:
   VCS configuration files exist on this system with the following information:
   Cluster Name: rac_cluster1
   Cluster ID Number: 7
   Systems: galaxy nebula saturn
   Service Groups: cvm oradb1_grp
   Do you want to uninstall SFRAC from these systems? [y,n,q] (y) n

   Caution Be sure to answer N. Otherwise the utility begins the procedure to uninstall Storage Foundation for Oracle RAC from all systems.

3. The installer prompts you to specify the name of the system from which you are uninstalling Storage Foundation for Oracle RAC:
   Enter the system names separated by spaces on which to uninstall SFRAC: saturn

4. The uninstaller checks for Storage Foundation for Oracle RAC packages currently installed on your system. It also checks for dependencies between packages to determine the packages it can safely uninstall and in which order.
   Checking SFRAC packages installed on galaxy:
   Checking VRTSvxmsa package... version 4.1 installed
   Checking VRTSvxmsa dependencies... no dependencies
   Checking VRTSvcsvr package... version 4.1 installed
   Checking VRTSvcsvr dependencies... no dependencies
   Checking VRTSvcsvp package... version 4.1 installed
   Checking VRTSvcsvp dependencies... no dependencies
   ..
   Checking VRTSgab package... version 4.1 installed
   Checking VRTSgab dependencies... VRTSdbac VRTSglm VRTSgms VRTSvcs VRTSvcsag VRTSvcsmg VRTSvxfen
   Checking VRTSllt package... version 4.1 installed
   Checking VRTSllt dependencies... VRTSdbac VRTSgab VRTSglm VRTSgms VRTSvxfen
   .

5. You can proceed with uninstallation when the uninstaller is ready:
   uninstallsfrac is now ready to uninstall SFRAC packages. All SFRAC processes that are currently running will be stopped.
   Are you sure you want to uninstall SFRAC packages? [y,n,q] (y)

6. When you press Enter to proceed, the uninstaller stops processes and drivers running on each system, and reports its activities.
   Stopping SFRAC processes on galaxy:
   Checking VRTSweb process... not running
   Checking had process... running
   Stopping had... Done
   Checking hashadow process... not running
   Checking CmdServer process... running
   Killing CmdServer... Done
   Checking notifier process... not running
   Checking vcsmm driver... vcsmm module loaded
   Stopping vcsmm driver... Done
   ..

7. When the installer begins removing packages from the systems, it indicates its progress by listing each step of the total number of steps required. For example:
   Uninstalling Storage Foundation for Oracle RAC 4.1 on all systems simultaneously:
   Uninstalling VRTSvxmsa 4.1 on saturn... Done 1 of 40 steps
   Uninstalling VRTSdbckp 4.1 on saturn... Done 2 of 40 steps
   Uninstalling VRTSvcsvr 4.1 on saturn... Done 3 of 40 steps
   Uninstalling VRTScsocw 4.1 on saturn... Done 4 of 40 steps
   ..
   Uninstalling VRTSvmpro 4.1 on saturn... Done 37 of 40 steps
   Uninstalling VRTSvmdoc 4.1 on saturn... Done 38 of 40 steps
   Uninstalling VRTSvmman 4.1 on saturn... Done 39 of 40 steps
   Uninstalling VRTSvxvm 4.1 on saturn... Done 40 of 40 steps
   Storage Foundation for Oracle RAC package uninstall completed successfully.

8. When the uninstaller is done, it describes the location of a summary file and a log of uninstallation activities.
   Uninstallation of Storage Foundation for Oracle RAC has completed successfully.
   The uninstallation summary is saved at:
   /opt/VRTS/install/logs/uninstallsfracdate_time.summary
   The uninstallsfrac log is saved at:
   /opt/VRTS/install/logs/uninstallsfracdate_time.log

Editing VCS Configuration Files on Existing Nodes

After running uninstallsfrac, modify the configuration files on the remaining nodes to remove references to the removed nodes.

To edit /etc/llthosts

On each of the existing nodes, using vi or another editor, edit the file /etc/llthosts, removing the lines corresponding to the removed nodes. For example, if saturn is the node being removed from the cluster, remove the line "2 saturn" from the file:
   0 galaxy
   1 nebula
   2 saturn

It should now resemble:
   0 galaxy
   1 nebula

To edit /etc/gabtab

In the file /etc/gabtab, change the command contained in the file to reflect the number of systems after the node is removed:
   /sbin/gabconfig -c -nN
where N is the number of nodes remaining. For example, with two nodes remaining, the file resembles:
   /sbin/gabconfig -c -n2

To modify the VCS configuration to remove a system

You can modify the VCS configuration using one of three possible methods: you can edit /etc/VRTSvcs/conf/config/main.cf (the VCS configuration file) directly, you can use the VCS GUI (Cluster Manager), or you can use the command line, as illustrated in the following example. Refer to the VERITAS Cluster Server User's Guide for details about how to configure VCS. On one of the remaining nodes, as root user, run the following procedure.

1. Make the configuration writable:
   # haconf -makerw

2. Remove the system from the AutoStartList of the Oracle service group oradb1_grp by specifying the remaining nodes in the desired order:
   # hagrp -modify oradb1_grp AutoStartList galaxy nebula

3. Remove the system from the SystemList of the oradb1_grp service group:
   # hagrp -modify oradb1_grp SystemList -delete saturn

4. If you have other Oracle service groups with Oracle resources that have the node to be removed in their configuration, perform step 2 and step 3 for each of them.

5. Delete the system from the CVMCluster resource by removing it from the CVMNodeId attribute keylist:
   # hares -modify cvm_clus CVMNodeId -delete saturn

6. Delete the system from the cvm service group AutoStartList, by specifying only the remaining nodes in the desired order:
   # hagrp -modify cvm AutoStartList galaxy nebula

7. Delete the system from the cvm service group SystemList:
   # hagrp -modify cvm SystemList -delete saturn

8. After deleting the node to be removed from all service groups in the configuration, delete the removed node from the cluster system list:
   # hasys -delete saturn

9. Save the new configuration to disk:
   # haconf -dump -makero

Removing Temporary Files

After running the uninstallsfrac utility, remove the following directories if they are still present on the removed system:
   /var/tmp/uninstallsfrac_date
   /tmp/uninstallsfrac_date

Removing the Infrastructure Packages

Some packages installed by installsfrac are not removed by uninstallsfrac. These are the infrastructure packages, which can be used by VERITAS products other than VERITAS Storage Foundation for Oracle RAC. These packages include:
   VRTScpi     VERITAS Cross Product/Platform Installation
   VRTSmuob    VERITAS Enterprise Administrator Service Localized Package
   VRTSob      VERITAS Enterprise Administrator Service
   VRTSobgui   VERITAS Enterprise Administrator
   VRTSperl    VERITAS Perl Redistribution
   VRTSvlic    VERITAS License Utilities
To remove them, use the uninstallinfr command. For example:

To remove infrastructure packages

1. As root user, change to the directory containing the uninstallinfr program:
   # cd /opt/VRTS/install

2. Enter the following command to start uninstallinfr:
   # ./uninstallinfr
   The opening screen appears, prompting you to enter the system from which you want to remove the infrastructure packages:
   Enter the system names separated by spaces on which to uninstall Infrastructure: saturn

3. The program checks the operating system on the system and sets up a log file at /var/tmp/uninstallinfrdate_time.

4. The program then checks the infrastructure packages on the system, determining whether or not there are dependencies upon them. Packages on which dependencies exist are not uninstalled; packages having no dependencies can be removed. The program prompts you to continue:
   Are you sure you want to uninstall Infrastructure packages? [y,n,q] (y) y

5. The utility stops the infrastructure processes that are running, if any, and proceeds to uninstall the packages from the system. When the infrastructure packages are successfully removed, a message resembling the following is displayed:
   Uninstallation of Infrastructure has completed successfully.
   The uninstallation summary is saved at:
   /opt/VRTS/install/logs/uninstallinfrdate_time.summary
   The uninstallinfr log is saved at:
   /opt/VRTS/install/logs/uninstallinfrdate_time.log

Removing VxVM Backup Files

In the directory /VXVM-CFG-BAK, remove the backup configuration files created by Volume Manager.
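A minimal sketch of this cleanup on the removed node, using the directory named above:

   # ls /VXVM-CFG-BAK
   # rm -rf /VXVM-CFG-BAK

Listing the directory first lets you confirm that only Volume Manager backup configuration files are being removed.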

Removing Licenses

Go to the directory /etc/vx/licenses/lic and remove unwanted licenses. You may rename them or move them to another directory if you do not want to remove them outright.
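Before deleting keys, you can identify them with the vxlicrep report described later in Removing License Files; a minimal sketch, in which the backup directory and the key file name are illustrative placeholders:

   # /sbin/vxlicrep
   # cd /etc/vx/licenses/lic
   # mkdir -p /var/tmp/lic.bak
   # mv unwanted_key.lic /var/tmp/lic.bak

Moving a key file rather than deleting it lets you restore the license later if needed.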

14 Uninstalling Storage Foundation for Oracle RAC on Systems Using Oracle9i

At the completion of the uninstallation activity, you can continue to run Oracle using the single-instance binary generated when you unlink the VERITAS binaries from Oracle.

Uninstalling Storage Foundation for Oracle RAC on Oracle9i:
- Stopping gsd and Applications Using Oracle9i on CFS on page 255
- Offlining the Oracle and Netlsnr Resources on page 256
- Removing the Oracle Database (optional) on page 256
- Uninstalling Oracle9i (optional) on page 256
- Unlinking Oracle9i Binaries from VERITAS Libraries on page 257
- Removing Storage Foundation for Oracle RAC Packages on page 258
- Removing Other VERITAS Packages on page 260
- Removing License Files on page 261
- Removing Other Configuration Files on page 261

The Uninstallation Flow Chart provides an overview of the steps for uninstalling Storage Foundation for Oracle RAC.

Uninstallation Flow Chart

[Flow chart: On each system in turn, start the uninstallation, stop gsd and all applications that use CFS, and offline the Oracle and Netlsnr resources. Optionally remove the database. If removing Oracle, remove the binaries (once if Oracle is on CFS, on every system otherwise); if keeping Oracle, remove the links to the VERITAS libraries (once if Oracle is on CFS, on every system otherwise). Continued on the next page.]

Uninstallation Flow Chart (continued)

[Flow chart, continued from the previous page: On each system, offline the Oracle service group and run uninstallsfrac; then remove the license, configuration, and temporary files from each system. Done.]

Stopping gsd and Applications Using Oracle9i on CFS

Stop the gsd processes on each node and ensure that no processes are using the CFS mount points. As user oracle, enter the command:
   $ $ORACLE_HOME/bin/gsdctl stop
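To confirm that nothing still holds a CFS mount point before continuing, a minimal sketch; the mount point /oradb1 is the example used elsewhere in this guide, and the -k option should be used with care:

   # fuser -m /oradb1
   # fuser -mk /oradb1

The first command lists the processes using the mount point; the second kills any that remain.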

Offlining the Oracle and Netlsnr Resources

Offline the Oracle and Netlsnr resources on each node. For example:
   # hares -offline VRT -sys galaxy
   # hares -offline rac -sys galaxy
   # hares -offline VRT -sys nebula
   # hares -offline rac -sys nebula
   # hares -offline LISTENER -sys galaxy
   # hares -offline LISTENER -sys nebula
These commands stop the Oracle instances and listeners running on the specified systems.

Removing the Oracle Database (optional)

You can remove the Oracle database after safely relocating the data as necessary.

Uninstalling Oracle9i (optional)

To uninstall Oracle9i, use the runinstaller utility. You must run the utility on each system if Oracle9i is installed locally. If Oracle9i is installed on a cluster file system, you need only run the utility once.

1. On one system, log in as oracle.

2. Set the DISPLAY variable.
   Example for the Bourne shell (bash, sh, or ksh):
   $ export DISPLAY=host:0.0
   Example for the C shell (csh or tcsh):
   $ setenv DISPLAY host:0.0

3. Run the Oracle9i utility runinstaller.
   $ cd $ORACLE_BASE/oui/install
   $ ./runinstaller
   As the utility starts up, be sure to select the option to uninstall the Oracle9i software. Refer to the Oracle9i Installation Guide for additional information about running the utility.

4. If necessary, remove Oracle9i from the other systems.

Unlinking Oracle9i Binaries from VERITAS Libraries

If you have uninstalled Oracle, skip this procedure. If you have not uninstalled Oracle, unlink the VERITAS libraries using the following procedure, which generates a single-instance Oracle binary.

To unlink Oracle9i binaries from VERITAS libraries

1. Log in as the oracle user.

2. Using vi or another text editor, edit the init$ORACLE_SID.ora file, setting the cluster_database parameter to FALSE:
   $ cd $ORACLE_HOME/dbs
   $ vi init$ORACLE_SID.ora
   cluster_database=FALSE

3. Change to the $ORACLE_HOME/lib directory:
   $ cd $ORACLE_HOME/lib

4. Restore the original Oracle libraries:
   $ rm libodm9.so
   $ cp libodm9.so_XX_XX_XX-XX_XX_XX libodm9.so
   $ rm libskgxp9.so
   $ cp libskgxp9.so_XX_XX_XX-XX_XX_XX libskgxp9.so
   $ rm libskgxn9.so
   $ cp libskgxn9.so_XX_XX_XX-XX_XX_XX libskgxn9.so

   Note Replace the XX_XX_XX-XX_XX_XX in the above example with the most recent timestamp.
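If several timestamped backups exist, the newest can be selected automatically; a minimal sketch, assuming the backup naming convention shown in step 4:

   $ cd $ORACLE_HOME/lib
   $ for lib in libodm9.so libskgxp9.so libskgxn9.so
   > do
   >    backup=`ls -t ${lib}_* | head -1`
   >    rm -f $lib
   >    cp $backup $lib
   > done

Because ls -t sorts newest first, head -1 picks the most recent backup of each library.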

Removing Storage Foundation for Oracle RAC Packages

The uninstallsfrac script removes packages installed by installsfrac on all systems in the cluster. The installer removes all Maintenance Pack 1 packages as well as the Storage Foundation for Oracle RAC 4.1 rpms, regardless of the version of Oracle used.

To run uninstallsfrac

1. Offline all service groups and shut down VCS prior to launching the uninstallsfrac script:
   # /opt/VRTSvcs/bin/hastop -all

   Note Do not use the -force option when executing hastop. Doing so leaves all service groups online while VCS shuts down, causing undesired results during the uninstallation of Storage Foundation for Oracle RAC.

2. As root user, change to the directory containing the uninstallsfrac program:
   # cd /opt/VRTS/install

3. Enter the following command to start uninstallsfrac:
   # ./uninstallsfrac
   The utility reports the cluster and systems for uninstalling.
   VCS configuration files exist on this system with the following information:
   Cluster Name: rac_cluster1
   Cluster ID Number: 7
   Systems: galaxy nebula
   Service Groups: ClusterService
   Do you want to uninstall SFRAC from these systems? [y,n,q] (y) y

4. Enter y if the cluster information is correct. The utility performs and displays a series of system checks for operating system level and system-to-system communication. Note that a log, with the date and time as part of its name, is created for the uninstallation:
   Logs for uninstallsfrac are being created in /var/tmp/uninstallsfracxxxxxxxxx.
   Using /usr/bin/ssh -x and /usr/bin/scp to communicate with remote systems.
   Initial system check completed successfully.
   Press [Enter] to continue:

The uninstaller checks for Storage Foundation for Oracle RAC packages currently installed on your system, and for dependencies between packages to determine the packages it can safely uninstall and in which order. The uninstaller stops processes and drivers running on each system, and reports its activities.
   Are you sure you want to uninstall SFRAC rpms? [y,n,q] (y) y
   Stopping Cluster Manager and Agents on galaxy... Done
   Stopping Cluster Manager and Agents on nebula... Done
   Checking galaxy for open volumes... None
   Checking for patch(1) rpm... version installed
   Stopping SFRAC processes on galaxy:
   Checking vxodm driver... vxodm module loaded
   Stopping vxodm driver... Done
   Unloading vxodm module on galaxy... Done
   Checking vxgms driver... vxgms module loaded
   Stopping vxgms driver... Done
   Unloading vxgms module on galaxy... Done
   Checking had process... not running
   Checking hashadow process... not running
   Checking CmdServer process... running
   Killing CmdServer... Done
   Checking notifier process... not running
   Checking vxsvc process... running
   Stopping vxsvc... Done
   Checking vcsmm driver... not running
   Checking lmx driver... not running
   Checking vxfen driver... not running
   Checking gab driver... gab module loaded
   Stopping gab driver... Done
   Unloading gab module on galaxy... Done
   Checking llt driver... llt module loaded
   Stopping llt driver... Done
   Unloading llt module on galaxy... Done
   Checking nebula for open volumes... None
   Checking for patch(1) rpm... version installed
   Stopping SFRAC processes on nebula:
   Checking vxodm driver... vxodm module loaded
   Stopping vxodm driver... Done
   Unloading vxodm module on nebula... Done
   Checking vxgms driver... vxgms module loaded
   Stopping vxgms driver... Done
   Unloading vxgms module on nebula... Done
   Checking had process... not running
   Checking hashadow process... not running
   Checking CmdServer process... running
   Killing CmdServer... Done
   Checking notifier process... not running
   Checking vxsvc process... running
   Stopping vxsvc... Done
   Checking vcsmm driver... not running
   Checking lmx driver... not running

   Checking vxfen driver... not running
   Checking gab driver... gab module loaded
   Stopping gab driver... Done
   Unloading gab module on nebula... Done
   Checking llt driver... llt module loaded
   Stopping llt driver... Done
   Unloading llt module on nebula... Done

5. When the installer begins removing packages from the systems, it indicates its progress by listing each step of the total number of steps required. For example:
   Uninstalling Storage Foundation for Oracle RAC 4.1 on all systems simultaneously:
   Uninstalling VRTScsocw on nebula... Done
   Uninstalling VRTScsocw on galaxy... Done
   Uninstalling VRTSvcsor on nebula... Done
   Uninstalling VRTSvcsor on galaxy... Done
   Uninstalling VRTSdbac on nebula... Done
   Uninstalling VRTSdbac on galaxy... Done
   Uninstalling VRTSglm on nebula... Done
   Uninstalling VRTSglm on galaxy... Done
   Uninstalling VRTScavf on galaxy... Done
   Storage Foundation for Oracle RAC rpm uninstall completed successfully.

6. When the uninstaller is done, it describes the location of a summary file and a log of uninstallation activities.
   Uninstallation of Storage Foundation for Oracle RAC has completed successfully.
   The uninstallation summary is saved at:
   /opt/VRTS/install/logs/uninstallsfracxxxxxxxxx.summary
   The uninstallsfrac log is saved at:
   /opt/VRTS/install/logs/uninstallsfracxxxxxxxxx.log

Removing Other VERITAS Packages

Some packages installed by installsfrac are not removed by uninstallsfrac. These packages, which can be used by VERITAS products other than VERITAS Storage Foundation for Oracle RAC, include:
   VRTScpi     VERITAS Cross Product/Platform Installation
   VRTSob      VERITAS Enterprise Administrator Service
   VRTSobgui   VERITAS Enterprise Administrator
   VRTSperl    VERITAS Perl Redistribution
   VRTSvlic    VERITAS License Utilities
If you want to remove them, use the uninstallinfr script.

To uninstall the VERITAS infrastructure rpms

   # cd /opt/VRTS/install
   # ./uninstallinfr

Removing License Files

1. To see what license key files you have installed on a system, enter:
   # /sbin/vxlicrep
   The output lists the license keys and information about their respective products.

2. Go to the directory containing the license key files and list them. Enter:
   # cd /etc/vx/licenses/lic
   # ls -a

3. Using the output from step 1, identify the unwanted key files listed in step 2. Unwanted keys may be deleted by removing the license key file.

Removing Other Configuration Files

You can remove the following configuration files:
   /etc/vcsmmtab
   /etc/vxfentab
   /etc/vxfendg
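A minimal sketch of this cleanup, removing the three files listed above:

   # rm -f /etc/vcsmmtab /etc/vxfentab /etc/vxfendg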


Section IV. Using Oracle 10g on SLES9

The chapters in this section assume you have used the overview, planning, and installing chapters to install and configure Storage Foundation for Oracle RAC. See Using Storage Foundation for Oracle RAC on page 1. Procedures for using Oracle 10g with Storage Foundation for Oracle RAC on SLES9 are provided in this section.

- Chapter 15. Installing Oracle 10g Software on SLES9 on page 265
- Chapter 16. Configuring VCS Service Groups for Oracle 10g on SLES9 on page 299
- Chapter 17. Adding and Removing Cluster Nodes for Oracle 10g on SLES9 on page 311
- Chapter 18. Uninstalling Storage Foundation for Oracle RAC on Oracle 10g Systems


15 Installing Oracle 10g Software on SLES9

Use this chapter to install Oracle 10g software on clean SLES9 systems. You can install the Oracle 10g binaries on shared storage or locally on each node. Storage Foundation for Oracle RAC supports Oracle 10g Release 1. For applying patchsets, see Applying Oracle 10g Patchsets on SLES9 on page 294. For Oracle information, access the Oracle Technology Network and see the Oracle Real Application Clusters Installation and Configuration Guide.

The examples in this guide primarily show the shared storage installation. The configuration of VCS service groups described in Configuring VCS Service Groups for Oracle 10g on Red Hat on page 137 and the example main.cf file shown in Sample Configuration Files on page 343 are based on using shared storage for the Oracle 10g binaries.

Configuring Oracle 10g Release 1 Prerequisites

Prerequisites after installing Storage Foundation for Oracle RAC:
- Creating OS Oracle User and Group on page 266
- Creating CRS_HOME on page 266
- Creating Volumes or Directories for OCR and Vote Disk on page 268
- Configuring Private IP Addresses on All Cluster Nodes on page 270
- Obtaining Public Virtual IP Addresses for Use by Oracle on page 270

Creating OS Oracle User and Group

On each system, create a local group and local user for Oracle. For example, create the group oinstall and the user oracle. Be sure to assign the same group ID, user ID, and home directory for the user on each system.

To create the OS Oracle user and group on each system

1. Create the oinstall and dba groups on each system:
   # groupadd -g 1000 oinstall
   # groupadd -g 1001 dba

2. Create the oracle user on each system. The user ID should be the same on all nodes:
   # useradd -g oinstall -u <User id> -G dba -d /oracle oracle

3. Enable rsh and key-based ssh authentication for the oracle user on all nodes.

Creating CRS_HOME

On each system in the SFRAC cluster, create a directory for CRS_HOME. The disk space required is 0.5 GB minimum.

To create CRS_HOME on each system

1. Log in as root user on one system:
   # su - root

2. Create groups and users.
   a. Referring to the Oracle Real Application Clusters Installation and Configuration Guide, create the groups oinstall (the Oracle Inventory group) and dba, and the user oracle, assigning the primary group for oracle to be oinstall and the secondary group for oracle to be dba. Assign a password for the oracle user.
   b. On the original node determine the user and group IDs, and use the identical IDs on each of the other nodes. Assign identical passwords for the user oracle.
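A minimal sketch for confirming that the user and group IDs match on every node (step 2b); the node names and the use of rsh are assumptions:

   # for node in galaxy nebula
   > do
   >    rsh $node id oracle
   > done

The uid, gid, and groups fields of the output should be identical on all nodes.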

3. On one node, create a disk group. For example:
   # vxdg init crsdg sdc
   For a shared CRS_HOME, on the CVM master:
   # vxdg -s init crsdg sdc

4. Create the volume in the group for the CRS_HOME. The volume should be a minimum of 0.5 GB:
   # vxassist -g crsdg make crsvol 500M

5. Start the volume:
   # vxvol -g crsdg startall

6. Create a VxFS file system on which to install CRS. For example:
   # mkfs -t vxfs /dev/vx/rdsk/crsdg/crsvol

7. Create the mount point for the CRS_HOME:
   # mkdir /oracle/crs

   Note Make sure that CRS_HOME is a subdirectory of ORACLE_BASE.

8. Mount the file system, using the device file for the block device:
   # mount -t vxfs /dev/vx/dsk/crsdg/crsvol /oracle/crs
   For a shared CRS_HOME, on the CVM master:
   # mount -t vxfs -o cluster /dev/vx/dsk/crsdg/crsvol /oracle/crs

9. For a local mount only, edit the /etc/fstab file and list the new file system. For example:
   /dev/vx/dsk/crsdg/crsvol /oracle/crs vxfs defaults 0 0

10. Set the CRS_HOME directory for the oracle user as /oracle/crs.

11. Assign ownership of the directory to oracle and the group oinstall:
    # chown -R oracle:oinstall /oracle/crs

12. On each cluster node, repeat step 1 through step 11. For a shared CRS_HOME, repeat only step 7 through step 8.
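A quick check that the CRS_HOME file system is mounted as expected on a node; these are standard Linux commands, not steps from this guide:

   # df -T /oracle/crs
   # mount | grep /oracle/crs

The file system type should show vxfs, and for a shared CRS_HOME the cluster option should appear in the mount output.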

Creating Volumes or Directories for OCR and Vote Disk

The OCR and Vote disk must be shared among all nodes in a cluster. Whether you create shared volumes or shared file system directories, you can add them in the VCS configuration to make them highly available. The ORACLE_BASE directory contains CRS_HOME and ORACLE_HOME. Create OCR and Vote disk directories in a cluster file system, or create OCR and Vote disk shared raw volumes. VERITAS recommends creating directories in a file system for the OCR and Vote disk instead of creating separate OCR and Vote disk volumes.

To create OCR and Vote disk directories on CFS

1. Log in as root user:
   # su - root

2. On the master node, create a shared disk group:
   # vxdg -s init ocrdg sdz

3. Create a volume in the shared group for the OCR and Vote disk:
   # vxassist -g ocrdg make ocrvol 200M

4. Assign ownership of the volume using the vxedit command:
   # vxedit -g ocrdg set user=oracle group=oinstall mode=660 ocrvol

5. Start the volume:
   # vxvol -g ocrdg startall

6. Create the file system:
   # mkfs -t vxfs /dev/vx/rdsk/ocrdg/ocrvol

7. Create a mount point for the Vote disk and OCR files on all the nodes:
   # mkdir /ora_crs

8. On all nodes, mount the file system:
   # mount -t vxfs -o cluster /dev/vx/dsk/ocrdg/ocrvol /ora_crs

9. Create directories for the Vote disk and OCR files:
   # cd /ora_crs
   When installing CRS, specify the following for the OCR and Vote disk respectively:
   /ora_crs/ocr
   /ora_crs/vote

10. Set oracle to be the owner of the file system, and set 755 as the permissions:
    # chown -R oracle:oinstall /ora_crs
    # chmod 755 /ora_crs

To create OCR and Vote disk on raw volumes

1. Log in as root user.

2. On the CVM master node, create a shared disk group:
   # vxdg -s init ocrdg sdz

3. Create volumes in the shared group for the OCR and Vote disk:
   # vxassist -g ocrdg make ocrvol 100M
   # vxassist -g ocrdg make vdvol 100M

4. Assign ownership of the volumes using the vxedit command:
   # vxedit -g ocrdg set user=oracle group=oinstall mode=660 ocrvol
   # vxedit -g ocrdg set user=oracle group=oinstall mode=660 vdvol

5. Start the volumes:
   # vxvol -g ocrdg startall

6. When installing CRS, specify the following for the OCR and Vote disk:
   OCR:       /dev/vx/rdsk/ocrdg/ocrvol
   Vote disk: /dev/vx/rdsk/ocrdg/vdvol
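With either layout, you can verify from each node that the shared disk group and its volumes are visible before starting the CRS installation; a minimal sketch using standard VxVM commands:

   # vxdg list ocrdg
   # vxprint -g ocrdg

The disk group should be listed as shared, and ocrvol (and vdvol, if created) should be in the ENABLED/ACTIVE state.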

Configuring Private IP Addresses on All Cluster Nodes

The CRS daemon requires a private IP address on each system to enable communications and heartbeating. Do the following to set up the private IP addresses.

To configure private IP addresses on all cluster nodes

1. On each cluster system, determine a private NIC device for which LLT is configured; look at the file /etc/llttab. For example, if eth0 is used as an LLT interconnect on one system, you can configure an available IP address for it. Example commands:
   # ifconfig eth0 down
   # ifconfig eth0 inet <private IP> netmask <netmask>
   # ifconfig eth0 up
   Configure one private NIC on each node.

   Note The private IP addresses of all nodes should be on the same physical network in the same IP subnet.

2. On each system, add the configured private IP addresses of all nodes to the /etc/hosts file, mapping them to symbolic names. Example:
   <private IP of galaxy> galaxy_priv
   <private IP of nebula> nebula_priv

3. From each system, try pinging each of the other nodes, using the symbolic system name associated with the private NIC IP address.

After configuring the IP addresses, you can edit the CVM service group and add the PrivNIC resource to make the IP addresses highly available. See Configuring VCS Service Groups for Oracle 10g on SLES9 on page 299.

Obtaining Public Virtual IP Addresses for Use by Oracle

Before starting the Oracle installation, you must create virtual IP addresses for each node. An IP address and an associated host name should be registered in the domain name service (DNS) for each public network interface.

To obtain public virtual IP addresses for use by Oracle

1. Obtain one virtual IP per node.

2. Add an entry for the virtual IP and the virtual public name in the /etc/hosts file, on all nodes.

3. Register the virtual IPs with DNS. Example /etc/hosts entries:
   <virtual IP of galaxy> galaxy_pub
   <virtual IP of nebula> nebula_pub

Installing Oracle 10g CRS and Database

Installing the Oracle 10g software on shared storage in a RAC environment requires you to create volumes and directories on CFS. The following installation options are supported:
- Installing Oracle 10g on a Shared Disk on page 271
- Installing Oracle 10g Release 1 Locally on page 277
- Installing Oracle 10g Release 1 Manually for x86_64 on page 282
- Installing Oracle 10g Release 1 Manually for IA64 on page 289

Installing Oracle 10g on a Shared Disk

To prepare to install Oracle 10g Release 1 on a shared disk

1. Log into any system of the cluster as the root user:
   # su - root

2. On the master node, create a shared disk group.
   a. Enter:
      # vxdg -s init orabinvol_dg sdd
   b. Create the volume in the shared group:
      # vxassist -g orabinvol_dg make orabinvol 3000M
      For the Oracle 10g binaries, make the volume 3 GB.

   c. Start the volume:
      # vxvol -g orabinvol_dg startall
   d. On the master node, create a VxFS file system on the shared volume on which to install the Oracle 10g binaries. For example, create the file system on orabinvol:
      # mkfs -t vxfs /dev/vx/dsk/orabinvol_dg/orabinvol

3. On each system, create the mount point for the Oracle binaries and mount the file system.
   a. Create the mount point for the Oracle binaries if it does not already exist:
      # mkdir /oracle/10g
   b. Mount the file system, using the device file for the block device:
      # mount -t vxfs -o cluster /dev/vx/dsk/orabinvol_dg/orabinvol /oracle/10g

4. From the CVM master, execute:
   # vxedit -g orabinvol_dg set user=oracle group=oinstall mode=660 orabinvol

5. Set oracle to be the owner of the file system:
   # chown oracle:oinstall /oracle

To install Oracle 10g Release 1 on a shared disk

1. Make sure that the Oracle installer is in a directory that is writable. If you are using the CD-ROM, ensure that the Oracle installation files are copied locally.

2. On the first system, set the following variables in root's environment on the node from which installsfrac -configure will be executed.
   a. For the Bourne shell (bash, sh, or ksh), enter:
      # export ORACLE_BASE=/oracle
      # export DISPLAY=host:0.0
   b. For the C shell (csh or tcsh):
      # setenv ORACLE_BASE /oracle
      # setenv DISPLAY host:0.0

3. Set the X server access control:
   # xhost + <hostname or ip address>
   where <hostname or ip address> is the hostname or IP address of the server to which you are displaying. Example:
   # xhost + <ip address>
   <ip address> being added to access control list

   Note By default, the installsfrac utility uses ssh for remote communication. However, rsh can be used in place of ssh by using the -usersh option with the installsfrac utility. The installation of Oracle 10g requires that rsh be configured on all nodes. Refer to the Oracle 10g documentation for details on configuring rsh.

4. On the same node where you have set the environment variables, execute the following command as root:
   # cd /opt/VRTS/install
   # ./installsfrac -configure
   The installer displays the copyright message.

5. When the installer prompts, enter the system names separated by spaces on which to configure Storage Foundation for Oracle RAC. For the installation example used in this procedure:
   galaxy nebula
   The installer checks both systems for communication and creates a log directory on the second system in /var/tmp/installsfracxxxxxxxxxx.

6. When the initial system check is successfully completed, press Enter to continue.

7. The installer proceeds to verify the license keys.
   a. Enter additional licenses at this time if any are needed.
   b. When the licenses are successfully verified, press Enter to continue.

8. The installer presents task choices for installing and configuring, depending on the operating system you are running. Example:
   This program enables you to perform one of the following tasks:
   1) Install Oracle 10g.
   2) Relink SFRAC for Oracle 10g.
   3) Configure different components of SFRAC
   Enter your choice [1-3]: [?]
   Select Install Oracle 10g. The installer proceeds to check environment settings.

9. Set the Oracle directories.
   a. When prompted, enter the directory name for CRS_HOME relative to the ORACLE_BASE directory.
   b. When prompted, enter the directory name for ORACLE_HOME relative to the ORACLE_BASE directory.
      The installer proceeds to validate ORACLE_HOME and check the node list.
   c. Press Enter to continue.

10. Configure user accounts.
    a. Enter the Oracle Unix User Account when prompted. The installer checks for the user on all systems.
    b. Enter the Oracle Inventory group when prompted. The installer checks for group existence on all systems.
    c. Press Enter to continue.

11. Enter the Oracle installer path for CRS when prompted. Specify the disk in your Oracle media kit where the CRS binaries reside. The installer validates the CRS installer. Example:
    /<CRS_Disk>/Disk1
    In the example, <CRS_Disk> is the disk where the CRS binaries reside.

12. Enter the Oracle installer path when prompted. The installer validates the Oracle installer, copies files, creates directories based on the information provided, and invokes the Oracle CRS installer. Example:
    /<DB_Disk>/Disk1
    In the example, <DB_Disk> is the disk where the Oracle binaries reside.
    When the Oracle CRS installer appears, it prompts for the following:
    a. Specify the file source and destination and click Next.
    b. Specify the name for the install and CRS_HOME and click Next.
    c. Select the language and click Next.
    d. The host names of the clustered machines display. Specify the private names for the nodes in the cluster and click Next.
    e. Specify the private interconnect and click Next.
    f. Specify the OCR file name with an absolute path, for example the /ora_crs/ocr/ocr file, and click Next.
    g. Specify the CSS (Vote disk) file name with an absolute path, for example the /ora_crs/css/css file, and click Next.
       The installer proceeds with the CRS installation and sets the CRS parameters.
    h. When prompted at the end of the CRS installation, run the $CRS_HOME/root.sh file on each cluster node.
    i. Exit the CRS installer after running root.sh and continue with installsfrac -configure for the Oracle 10g binaries installation.

13. Press Enter to continue. The Oracle 10g database installer window displays.
    a. Specify the file locations and click Next.
    b. Select all nodes in the cluster and click Next.
    c. Choose the installation type. The installer verifies that the requirements are all met.

    d. Provide the OSDBA and OSOPER groups for the UNIX Oracle users when prompted.
    e. When prompted by the Create Database option, select No.
    f. Install the binaries now.
    g. The installer prompts you to run $ORACLE_HOME/root.sh on each node. The VIPCA utility displays on the first node only. On that node (a sketch of this sequence follows this procedure):
       - Export LD_ASSUME_KERNEL; see the Oracle Metalink document for the appropriate value.
       - Export DISPLAY=<ip>:0.
       - Execute $ORACLE_HOME/root.sh.
       - Select the public NIC for all the systems.
       - Enter the virtual IP addresses for the public NIC.
    h. The installer confirms when the installation is successful. Exit the Oracle 10g installer and return to installsfrac -configure.

14. In installsfrac -configure, press Enter to continue. The success of the configuration is reported.
    The configuration summary is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.summary
    The installsfrac log is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.log

15. After successful installation of CRS and Oracle 10g, a RAC database can be configured. Use your own tools or see Creating a Starter Database on page 351. For the IA64 architecture, install the Oracle patch described in Applying Oracle 10g Patchsets on SLES9 on page 294.

16. Bring CVM and the private NIC under VCS control. This can be achieved by:
    - Using the VCS Oracle RAC Configuration wizard. See Creating an Oracle Service Group on page 299.
    - Manually editing the VCS configuration file. See Configuring CVM Service Group for Oracle 10g Manually on page 305, and Configuring the PrivNIC Agent on page 369.
    If the nodes are rebooted before configuring the CVM service group, the services will not start on their own.
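A minimal sketch of the root.sh sequence from step 13g, run as root on the first node; the LD_ASSUME_KERNEL value and display address are placeholders to be taken from the Oracle Metalink document and your environment:

   # export LD_ASSUME_KERNEL=<value from Metalink>
   # export DISPLAY=<ip>:0
   # $ORACLE_HOME/root.sh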

Installing Oracle 10g Release 1 Locally

Use these procedures to install Oracle 10g on each system locally in a VERITAS Storage Foundation 4.1 for Oracle RAC environment.

To prepare to install Oracle 10g Release 1 locally

1. Log in as root user on one system.

2. On one node, create a disk group.
   a. Enter:
      # vxdg init or_dg sdz
   b. Create the volume in the group:
      # vxassist -g or_dg make or_vol 5000M
      For the Oracle 10g binaries, make the volume 5,000 MB.
   c. Start the volume:
      # vxvol -g or_dg startall
   d. Create a VxFS file system on or_vol on which to install the Oracle 10g binaries. For example:
      # mkfs -t vxfs /dev/vx/dsk/or_dg/or_vol

3. Create the mount point for the file system.
   a. Enter:
      # mkdir /oracle
   b. Mount the file system, using the device file for the block device:
      # mount -t vxfs /dev/vx/dsk/or_dg/or_vol /oracle
   c. To mount the file system automatically across reboots, edit the /etc/fstab file and add the new file system. For example:
      /dev/vx/dsk/or_dg/or_vol /oracle vxfs defaults 0 0

4. Create a local group and a local user for Oracle. For example, create the group oinstall and the user oracle. Be sure to assign the same user ID and group ID for the user on each system.

5. Set the home directory for the oracle user as /oracle.

6. Set oracle as the owner of the volume:
   # vxedit -g or_dg set user=oracle group=oinstall mode=660 or_vol

7. Repeat step 1 through step 6 on the other systems.

8. Set the X server access control:
   # xhost + <hostname or ip address>
   where <hostname or ip address> is the hostname or IP address of the server to which you are displaying. Example:
   # xhost + <ip address>
   <ip address> being added to access control list

9. On the first system, set the following variables in root's environment on the node from which installsfrac -configure will be executed.
   a. For the Bourne shell (bash, sh, or ksh), enter:
      # export ORACLE_BASE=/oracle
      # export DISPLAY=host:0.0
   b. For the C shell (csh or tcsh):
      # setenv ORACLE_BASE /oracle
      # setenv DISPLAY host:0.0

To install Oracle 10g Release 1 locally

1. Ensure that the Oracle installer is in a directory that is writable. If you are using the CD-ROM, ensure that the Oracle installation files are copied locally.

   Note By default, the installsfrac utility uses ssh for remote communication. However, rsh can be used in place of ssh by using the -usersh option with the installsfrac utility.

2. On the same node where you have set the environment variables, execute the following command as root:
   # cd /opt/VRTS/install
   # ./installsfrac -configure
   The installer displays the copyright message.

3. When the installer prompts, enter the system names separated by spaces on which to configure Storage Foundation for Oracle RAC. For the installation example used in this procedure:
   galaxy nebula
   The installer checks both systems for communication and creates a log directory on the second system in /var/tmp/installsfracxxxxxxxxxx.

4. When the initial system check is successfully completed, press Enter to continue.

5. The installer proceeds to verify the license keys.
   a. Enter additional licenses at this time if any are needed.
   b. When the licenses are successfully verified, press Enter to continue.

6. The installer presents task choices for installing and configuring, depending on the operating system you are running. Example:
   This program enables you to perform one of the following tasks:
   1) Install Oracle 10g.
   2) Relink SFRAC for Oracle 10g.
   3) Configure different components of SFRAC
   Enter your choice [1-3]: [?]
   Select Install Oracle 10g. The installer proceeds to check environment settings.

7. Configure the Oracle directories:
   a. When prompted, enter the directory name for CRS_HOME relative to the ORACLE_BASE directory.
   b. When prompted, enter the directory name for ORACLE_HOME relative to the ORACLE_BASE directory.
      The installer proceeds to validate ORACLE_HOME and check the node list.
   c. Press Enter to continue.

8. Set user accounts.
   a. Enter the Oracle Unix User Account when prompted. The installer checks for the user on all systems.
   b. Enter the Oracle Inventory group when prompted. The installer checks for group existence on all systems.
   c. Press Enter to continue.

9. Enter the Oracle installer path for CRS when prompted. The installer validates the CRS installer.

10. Enter the Oracle installer path when prompted. The installer validates the Oracle installer, copies files, creates directories based on the information provided, and invokes the Oracle CRS installer.
    When the Oracle CRS installer appears, it prompts for the following:
    a. Specify the file source and destination and click Next.
    b. Run orainstroot.sh on all nodes. If it does not exist, copy it to all nodes.
    c. Specify the name for the install and CRS_HOME and click Next.
    d. Select the language and click Next.
    e. Specify the host names and private names for the nodes in the cluster and click Next.
    f. Specify the private interconnect and click Next.
    g. Specify the OCR file name with an absolute path, for example the /ora_crs/ocr_css/ocr file, and click Next.

    h. Specify the CSS (Vote disk) file name with an absolute path, for example the /ora_crs/ocr_css/css file, and click Next.
       The installer proceeds with the CRS installation and sets the CRS parameters.
    i. When prompted at the end of the CRS installation, run the $CRS_HOME/root.sh file on each cluster node.
    j. Exit the CRS installer after running root.sh and continue with installsfrac -configure for the Oracle 10g binaries installation.

11. Press Enter to continue. The Oracle 10g database installer window displays.
    a. Specify the file locations and click Next.
    b. Select all nodes in the cluster and click Next.
    c. Choose the installation type. The installer verifies that the requirements are all met.
    d. Provide the OSDBA and OSOPER groups for the UNIX Oracle users when prompted.
    e. When prompted to create the database, select the No option.
    f. Install the binaries now.
    g. The installer prompts you to run $ORACLE_HOME/root.sh on each node. The VIPCA utility displays on the first node only. On that node:
       - Export LD_ASSUME_KERNEL; see the Oracle Metalink document for the appropriate value.
       - Export DISPLAY=<ip>:0.
       - Execute $ORACLE_HOME/root.sh.
       - Select the public NIC for all the systems.
       - Enter the virtual IP addresses for the public NIC.
    h. The installer confirms when the installation is successful. Exit the Oracle 10g installer and return to installsfrac -configure.

12. In installsfrac -configure, press Enter to continue. The success of the configuration is reported.
    The configuration summary is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.summary
    The installsfrac log is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.log

13. For the IA64 architecture, install the Oracle patch described in Applying Oracle 10g Patchsets on SLES9 on page 294.

14. After successful installation of CRS and Oracle 10g, a RAC database can be configured. Use your own tools or see Creating a Starter Database on page 351.

15. Bring CVM and the private NIC under VCS control. This can be achieved by:
    - Using the VCS Oracle RAC Configuration wizard. See Creating an Oracle Service Group on page 299.
    - Manually editing the VCS configuration file. See Configuring CVM Service Group for Oracle 10g Manually on page 305, and Configuring the PrivNIC Agent on page 369.
    If the nodes are rebooted before configuring the CVM service group, the services will not start on their own.

Installing Oracle 10g Release 1 Manually for x86_64

Typically, VERITAS recommends using installsfrac -configure to install the Oracle 10g RAC binaries. However, some situations may require manual installation of the Oracle 10g RAC binaries. The following steps are required to install Oracle 10g manually:
- Patching the CRS OUI
- Pre-installation tasks
- OUI-based installation for CRS
- OUI-based installation for the database
- Post-installation relinking
- Post-installation configuration

To patch the CRS OUI

1. Make sure that the installation source CRS Disk1 directory has write permission for the Oracle user, and that the database Disk1 and Disk2 directories have read/execute permissions.

2. Log in as oracle:
   # su - oracle
   The Oracle CRS installer must be patched so that it detects the presence of VCS and uses the correct cluster membership. If the OUI has been patched previously using this procedure, proceed to the next section.

3. Search for the clusterQueries.jar file inside the CRS OUI directory structure. With the current release of the OUI, it may be at the following location. Copy it to a temporary location:
   $ cp <installer_directory>/disk1/stage/queries/clusterqueries/<version>/1/clusterQueries.jar /tmp/jar

4. Unzip this file in a temporary location such as /tmp/jar. This directory should then contain coe.tar under the /tmp/jar/bin/linux/ directory:
   # unzip /tmp/jar/clusterQueries.jar bin/linux/coe.tar -d /tmp/jar

5. Extract this file at another temporary location such as /tmp/tar:
   # cd /tmp/tar
   # tar -xvf /tmp/jar/bin/linux/coe.tar

6. Back up the original lsnodes.sh script:
   # cp lsnodes.sh lsnodes.sh.orig

7. Patch the lsnodes.sh file as follows, which creates a new file lsnodes_new.sh:
   # cat lsnodes.sh | sed -e '/CM / i\
   #Patch to check if something is present in central location \
   if [ -d /opt/ORCLcluster/lib ]; then \
   CL="/opt/ORCLcluster/lib" \
   export LD_LIBRARY_PATH=\$CL:\$LD_LIBRARY_PATH \
   cd $base \
   ret=`./lsnodes` \
   if [ $? = 0 ]; then \
   echo "CL"; \
   exit; \
   fi \
   fi \
   ' > lsnodes_new.sh
   The lsnodes.sh file determines whether to use the existing cluster membership or to ask the user to input a new set of nodes that will form the cluster. This script acknowledges the existence of a cluster only if Oracle's Cluster Manager (oracm) is running. The above patch changes this behavior so that lsnodes.sh acknowledges the presence of the cluster if the /opt/ORCLcluster/lib directory is present and lsnodes executes correctly.

8. Overwrite the existing lsnodes.sh with the new lsnodes_new.sh file:
   # cp lsnodes_new.sh lsnodes.sh

9. Ensure that the lsnodes.sh file has permissions set to 755:
   # chmod 755 lsnodes.sh

10. Back up the lsnodes_get.sh file:
    # cp lsnodes_get.sh lsnodes_get.sh.orig

11. Patch the lsnodes_get.sh file as follows, which creates a new file lsnodes_get_new.sh:
    # cat lsnodes_get.sh | sed -e '
    s_CL="/etc/ORCLcluster/oracm/lib"_\
    if [ -d /opt/ORCLcluster/lib ]; then \
    CL="/opt/ORCLcluster/lib" \
    else \
    CL="/etc/ORCLcluster/oracm/lib" \
    fi _ ' > lsnodes_get_new.sh
    The lsnodes_get.sh script queries the lsnodes command for the cluster members. This patch sets the CL variable to /opt/ORCLcluster/lib, which is exported as the LD_LIBRARY_PATH value. This allows the VERITAS vcsmm library to be used while determining the cluster membership.

12. Overwrite the existing lsnodes_get.sh with the new lsnodes_get_new.sh file:
    # cp lsnodes_get_new.sh lsnodes_get.sh

13. Ensure that the lsnodes_get.sh file has permissions set to 755:
    # chmod 755 lsnodes_get.sh

14. Delete the lsnodes_new.sh and lsnodes_get_new.sh files:
    # rm -f lsnodes_new.sh lsnodes_get_new.sh

15. Re-create coe.tar from the /tmp/tar location as follows:
    # tar cvf /tmp/coe.tar -C /tmp/tar/ .

16. Overwrite the old coe.tar present under the /tmp/jar/bin/linux/ directory.

17. Create the jar file as follows:
    # cd /tmp/jar
    # zip -r clusterQueries.jar bin/linux/coe.tar
    Make sure that the -M option is passed while creating the jar; the jar file does not require a manifest file.

18. Copy this patched jar file back into the OUI location.

To complete the pre-installation tasks

Execute this procedure as the oracle user.

1. Use the steps in Configuring Oracle 10g Release 1 Prerequisites on page 265 to set up the required mounts (Oracle home, CRS home, and the OCR and Vote disk) and the other prerequisites for Oracle 10g:
   a. Creating OS Oracle User and Group on page 266
   b. Creating CRS_HOME on page 266
   c. Creating Volumes or Directories for OCR and Vote Disk on page 268
   d. Configuring Private IP Addresses on All Cluster Nodes on page 270
   e. Obtaining Public Virtual IP Addresses for Use by Oracle on page 270

2. Create the /opt/ORCLcluster/lib directory on all cluster nodes:
   # mkdir -p /opt/ORCLcluster/lib

3. Copy /usr/lib64/libvcsmm.so to the /opt/ORCLcluster/lib directory as libskgxn2.so and libcmdll.so on all cluster nodes:

   # cp /usr/lib64/libvcsmm.so /opt/ORCLcluster/lib/libskgxn2.so
   # cp /usr/lib64/libvcsmm.so /opt/ORCLcluster/lib/libcmdll.so

4. Export the ORACLE_BASE and ORACLE_HOME variables.

To use OUI-based installation for CRS

Execute ./runInstaller and follow the instructions given in Oracle's 10g installation guide.

Note If the cluster node names are not displayed on the "Specify the host names" page of the OUI, verify that the CRS OUI has been patched, that the pre-installation steps have been completed correctly, and that vcsmm is configured on that cluster.

To use OUI-based installation for the database

Install the 10g database using the OUI. Known issues with the database installer:

- If root.sh fails to execute: export LD_ASSUME_KERNEL= and then invoke root.sh for the database install. Refer to the corresponding Oracle MetaLink doc ID.

- If root.sh hangs when the OCR/Vote disk is on a raw volume: the required Oracle patch needs to be applied before running root.sh for the database install. See the patch details on MetaLink for more information.

To perform post-install re-linking

1. To perform the following steps on all of the cluster nodes, become the oracle user:

   # su - oracle

2. Execute:

   $ cp $ORACLE_HOME/lib/libskgxp10.so $ORACLE_HOME/lib/libskgxp10.so.orig
   $ cp /opt/VRTSvcs/rac/lib/skgxp23/64/libskgxp10.so $ORACLE_HOME/lib/libskgxp10.so
   $ cp $ORACLE_HOME/lib/libskgxn2.so $ORACLE_HOME/lib/libskgxn2.so.orig
   $ cp /usr/lib64/libvcsmm.so $ORACLE_HOME/lib/libskgxn2.so
   $ cp $ORACLE_HOME/lib/libodm10.so $ORACLE_HOME/lib/libodm10.so.orig
   $ cp /opt/VRTSodm/lib64/libodm.so $ORACLE_HOME/lib/libodm10.so
   $ cp $CRS_HOME/lib/libskgxn2.so $CRS_HOME/lib/libskgxn2.so.orig
   $ cp /usr/lib64/libvcsmm.so $CRS_HOME/lib/libskgxn2.so
   $ cp $CRS_HOME/lib/libskgxp10.so $CRS_HOME/lib/libskgxp10.so.orig
   $ cp /opt/VRTSvcs/rac/lib/skgxp23/64/libskgxp10.so $CRS_HOME/lib/libskgxp10.so
   $ cp $ORACLE_HOME/lib32/libskgxn2.so $ORACLE_HOME/lib32/libskgxn2.so.orig
   $ cp /usr/lib/libvcsmm.so $ORACLE_HOME/lib32/libskgxn2.so
   $ cp $CRS_HOME/lib32/libskgxn2.so $CRS_HOME/lib32/libskgxn2.so.orig
   $ cp /usr/lib/libvcsmm.so $CRS_HOME/lib32/libskgxn2.so

3. Repeat step 2 on all cluster nodes as the oracle user. The libraries must not be copied as root: doing so will break Oracle.

To complete post-install configuration

1. Log in as root:

   $ su - root

2. Execute on any one node of the cluster:

   # $CRS_HOME/bin/crsctl set css misscount

3. Back up the init.cssd file on all the nodes:

   # cp /etc/init.d/init.cssd /etc/init.d/init.cssd.orig

4. Patch the /etc/init.d/init.cssd script on all of the cluster nodes as follows (the inserted block is shown unescaped at the end of this procedure):

   # cat /etc/init.d/init.cssd | sed -e '/Code to/i\
   #Veritas Cluster server\
   if [ -d "/opt/VRTSvcs" ]\
   then \
   VENDOR_CLUSTER_DAEMON_CHECK="/opt/VRTSvcs/ops/bin/checkvcs"\
   CLINFO="/opt/VRTSvcs/ops/bin/clsinfo"\
   SKGXNLIB="/opt/ORCLcluster/lib/libskgxn2.so"\
   fi ' > /tmp/init.cssd

5. Repeat step 4 on all cluster nodes.

6. Copy the modified /tmp/init.cssd script over /etc/init.d/init.cssd on each node (the existing file was backed up in step 3).

7. Stop CRS on all nodes:

   # /etc/init.d/init.crs stop

8. Verify that CRS is stopped on all nodes:

   # ps -ef | grep ocssd

9. After successful installation of CRS and Oracle 10g, a RAC database can be configured. Use your own tools or see Creating a Starter Database on page 351.

10. Bring CVM and the private NIC under VCS control. This can be achieved by:

    - Using the VCS Oracle RAC configuration wizard. See Creating an Oracle Service Group on page 299.
    - Manually editing the VCS configuration file. See Configuring CVM Service Group for Oracle 10g Manually on page 305, and Configuring the PrivNIC Agent on page 369.

    If the nodes are rebooted before the CVM service group is configured, the services will not start on their own.
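Unescaped, the block that step 4 inserts before the line matching "Code to" in init.cssd looks like the following (reconstructed from the sed script; indentation is added for clarity only):

   #Veritas Cluster server
   if [ -d "/opt/VRTSvcs" ]
   then
       VENDOR_CLUSTER_DAEMON_CHECK="/opt/VRTSvcs/ops/bin/checkvcs"
       CLINFO="/opt/VRTSvcs/ops/bin/clsinfo"
       SKGXNLIB="/opt/ORCLcluster/lib/libskgxn2.so"
   fi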

Installing Oracle 10g Release 1 Manually for IA64

Typically, VERITAS recommends using installsfrac -configure to install the Oracle 10g RAC binaries. However, some situations may require manual installation of the Oracle 10g RAC binaries. The following steps are required to install Oracle 10g manually:

- Patching the CRS OUI
- Pre-installation tasks
- OUI-based installation for CRS
- OUI-based installation for the database
- Post-installation re-linking
- Post-installation configuration

To patch the CRS OUI

1. Log in as oracle:

   # su - oracle

   The Oracle CRS installer must be patched so that it detects the presence of VCS and uses the correct cluster membership. If the OUI has been patched previously using this procedure, proceed to the next section.

2. Search for the clusterQueries.jar file inside the CRS OUI directory structure. For the current release of OUI, it may be at the following location. Copy it to a temporary location:

   $ cp <installer_directory>/disk1/stage/queries/clusterqueries/ /1/clusterQueries.jar /tmp/jar

3. Unzip this file in a temporary location such as /tmp/jar. This directory should then contain coe.tar under the /tmp/jar/bin/ia64 directory; copy it to /tmp/tar:

   $ unzip /tmp/jar/clusterQueries.jar bin/ia64/coe.tar -d /tmp/jar
   $ cp /tmp/jar/bin/ia64/coe.tar /tmp/tar

4. Extract this file at another temporary location such as /tmp/tar:

   $ cd /tmp/tar
   $ tar -xvf /tmp/tar/coe.tar

5. Back up the original lsnodes.sh script:

   $ cp lsnodes.sh lsnodes.sh.orig

6. Patch the lsnodes.sh file as follows, which creates a new file, lsnodes_new.sh:

   $ cat lsnodes.sh | sed -e '/CM / i #Patch to Check if something is present in central location \
   if [ -d /opt/ORCLcluster/lib ]; then \
   CL="/opt/ORCLcluster/lib" \
   export LD_LIBRARY_PATH=\$CL:\$LD_LIBRARY_PATH \
   cd $base \
   ret=`./lsnodes` \
   if [ $? = 0 ]; then \
   echo "CL"; \
   exit; \
   fi \
   fi \
   ' > lsnodes_new.sh

   The lsnodes.sh file determines whether to use the existing cluster membership or to ask the user for a new set of nodes that will form the cluster. This script acknowledges the existence of a cluster only if Oracle's Cluster Manager (oracm) is running. The patch changes this behavior so that lsnodes.sh also acknowledges the presence of a cluster if the /opt/ORCLcluster/lib directory is present and lsnodes executes correctly.

7. Overwrite the existing lsnodes.sh with the new lsnodes_new.sh file:

   $ cp lsnodes_new.sh lsnodes.sh

8. Make sure that the lsnodes.sh file has permissions set to 755:

   $ chmod 755 lsnodes.sh

9. Back up the lsnodes_get.sh file:

   $ cp lsnodes_get.sh lsnodes_get.sh.orig

10. Patch lsnodes_get.sh to create a new file, lsnodes_get_new.sh:

    $ cat lsnodes_get.sh | sed -e ' s_CL="/etc/ORCLcluster/oracm/lib"_\
    if [ -d /opt/ORCLcluster/lib ]; then \
    CL="/opt/ORCLcluster/lib" \
    else \
    CL="/etc/ORCLcluster/oracm/lib" \
    fi _ ' > lsnodes_get_new.sh

    The lsnodes_get.sh script queries the lsnodes command for the cluster members. This patch sets the CL variable to /opt/ORCLcluster/lib, which is exported as the LD_LIBRARY_PATH value. This allows the VERITAS vcsmm library to be used while determining the cluster membership.

11. Overwrite the existing lsnodes_get.sh with the new lsnodes_get_new.sh file:

    $ cp lsnodes_get_new.sh lsnodes_get.sh

12. Make sure that the lsnodes_get.sh file has permissions set to 755:

    $ chmod 755 lsnodes_get.sh

13. Delete the lsnodes_new.sh and lsnodes_get_new.sh files:

    $ rm -f lsnodes_new.sh lsnodes_get_new.sh

14. Re-create coe.tar from the /tmp/tar location:

    $ tar cvf /tmp/coe.tar -C /tmp/tar .

15. Overwrite the old coe.tar present under the /tmp/jar/bin/ia64 directory.

16. Create the jar file as follows:

    $ jar -cMf /tmp/clusterQueries.jar -C /tmp/jar .

    Make sure that the -M option is passed while creating the jar; the jar file does not require a manifest file. A quick content check is shown after this procedure.

17. Copy the patched jar file back into the OUI location.
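An optional content check on the rebuilt archive (not part of the original procedure):

   $ jar -tf /tmp/clusterQueries.jar

The listing should include bin/ia64/coe.tar and, because the jar was created with -M, no META-INF/MANIFEST.MF entry.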

To complete the pre-installation tasks

Execute this procedure as the Oracle user.

1. Use the steps in Configuring Oracle 10g Release 1 Prerequisites on page 265 to set up the required mounts (Oracle home, CRS home, and the OCR and Vote disk) and the other prerequisites for Oracle 10g:

   a. Creating OS Oracle User and Group on page 266
   b. Creating CRS_HOME on page 266
   c. Creating Volumes or Directories for OCR and Vote Disk on page 268
   d. Configuring Private IP Addresses on All Cluster Nodes on page 270
   e. Obtaining Public Virtual IP Addresses for Use by Oracle

2. Create the /opt/ORCLcluster/lib directory on all cluster nodes:

   # mkdir -p /opt/ORCLcluster/lib

3. Copy /usr/lib64/libvcsmm.so to the /opt/ORCLcluster/lib directory as libcmdll.so on all cluster nodes:

   # cp /usr/lib64/libvcsmm.so /opt/ORCLcluster/lib/libcmdll.so

4. Export the ORACLE_BASE and ORACLE_HOME variables.

To use OUI-based installation for CRS

Install the CRS binaries, following the instructions given in Oracle's 10g installation guide.

Note If the cluster node names are not displayed on the "Specify the host names" page of the OUI, verify that the CRS OUI has been patched, that the pre-installation steps have been completed correctly, and that vcsmm is configured on that cluster (a quick check is shown at the end of this section).

To use OUI-based installation for the database

Install the 10g database binaries using the OUI. No special steps are needed for an Oracle 10g database installation.
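One way to check that vcsmm is configured (an optional check, not part of the original procedure) is to look for its port in the GAB membership output, as described in Verifying GAB Port Membership:

   # /sbin/gabconfig -a

A membership line for port o indicates that the vcsmm module is running on the cluster nodes.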

To perform post-install re-linking

Perform the following steps as the oracle user on all of the cluster nodes to relink the VERITAS libraries with Oracle's binaries:

   # su - oracle
   $ cp $ORACLE_HOME/lib/libskgxp10.so $ORACLE_HOME/lib/libskgxp10.so.orig
   $ cp /opt/VRTSvcs/ops/lib/skgxp23/64/libskgxp10.so $ORACLE_HOME/lib/libskgxp10.so
   $ cp $ORACLE_HOME/lib/libskgxn2.so $ORACLE_HOME/lib/libskgxn2.so.orig
   $ cp /usr/lib64/libvcsmm.so $ORACLE_HOME/lib/libskgxn2.so
   $ cp $ORACLE_HOME/lib/libodm10.so $ORACLE_HOME/lib/libodm10.so.orig
   $ cp /usr/lib/libodm.so $ORACLE_HOME/lib/libodm10.so
   $ cp $CRS_HOME/lib/libskgxn2.so $CRS_HOME/lib/libskgxn2.so.orig
   $ cp /usr/lib64/libvcsmm.so $CRS_HOME/lib/libskgxn2.so

To complete post-install configuration

1. Log in as root:

   $ su - root

2. Execute on any one node of the cluster:

   # $CRS_HOME/bin/crsctl set css misscount

3. Patch the /etc/init.d/init.cssd script on all of the cluster nodes:

   # cat /etc/init.d/init.cssd | sed -e '/Code to/i\
   #Veritas Cluster server\
   if [ -d "/opt/VRTSvcs" ]\
   then \
   VENDOR_CLUSTER_DAEMON_CHECK="/opt/VRTSvcs/ops/bin/checkvcs"\
   CLINFO="/opt/VRTSvcs/ops/bin/clsinfo"\
   fi ' > /tmp/init.cssd

4. Install the required Oracle patch. See Applying Oracle 10g Patchsets on SLES9 on page 294.

5. After successful installation of CRS and Oracle 10g, a RAC database can be configured. See Creating a Starter Database on page 351.

6. Bring CVM and the private NIC under VCS control. This can be achieved by:

   - Using the VCS Oracle RAC configuration wizard. See Creating an Oracle Service Group on page 299.
   - Manually editing the VCS configuration file. See Configuring CVM Service Group for Oracle 10g Manually on page 305, and Configuring the PrivNIC Agent on page 369.

   If the nodes are rebooted before the CVM service group is configured, the services will not start on their own.

Applying Oracle 10g Patchsets on SLES9

Storage Foundation for Oracle RAC supports Oracle 10g or greater.

Restore Oracle's ODM library

Perform the following on all the nodes in the cluster.

1. Log in as the oracle user:

   # su - oracle

2. Go to the $ORACLE_HOME/lib directory:

   $ cd $ORACLE_HOME/lib

3. Replace the VERITAS ODM library with Oracle's ODM library (a verification example appears at the end of this section):

   $ rm libodm10.so
   $ ln -s libodmd10.so libodm10.so

To apply an Oracle 10g patchset

1. Log in as the oracle user:

   # su - oracle

2. Apply the patchset.

   Note Refer to the Oracle 10g documentation or the relevant release notes for applying the patchset.

3. Perform post-upgrade relinking of the VERITAS libraries.
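Before applying the patchset, you can confirm the restore performed above with a directory listing (an optional check, not part of the original procedure):

   $ ls -l $ORACLE_HOME/lib/libodm10.so

The output should show libodm10.so as a symbolic link pointing to libodmd10.so.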

Relinking Oracle with VERITAS Libraries After an Upgrade

After applying an Oracle patchset, the Oracle libraries must be relinked. This post-upgrade reconfiguration allows Oracle to use the VERITAS RAC libraries rather than the native Oracle libraries. This is a required step in the Oracle RAC installation process, as Oracle will not function properly in an SFRAC environment without the VERITAS libraries.

To relink VERITAS libraries

1. Log in as root:

   $ su - root

2. Export the ORACLE_BASE and ORACLE_HOME environment variables.

3. Change directories:

   # cd /opt/VRTS/install

4. Start the installer (steps 2 through 4 are condensed into a sketch at the end of this procedure):

   # ./installsfrac -configure

5. The installer checks the systems:

   a. When the installer prompts, enter the system names separated by spaces. The installer checks both systems for communication and creates a log directory on the second system in /var/tmp/installsfracxxxxxxxxxx.
   b. When the initial system check completes successfully, press Enter to continue.

6. The installer proceeds to verify the license keys:

   a. Enter additional licenses now if any are needed.
   b. When the licenses are successfully verified, press Enter to continue.

7. The installer presents task choices for installing and configuring, depending on the operating system you are running. Example:

   This program enables you to perform one of the following tasks:
   1) Install Oracle 10g.
   2) Relink SFRAC for Oracle 10g.
   3) Configure different components of SFRAC
   Enter your choice [1-3]: [?]

   Select Relink SFRAC for Oracle 10g. The installer proceeds to check environment settings.

8. Set the Oracle directories:

   a. When prompted, enter the directory name for CRS_HOME relative to ORACLE_BASE.
   b. When prompted, enter the directory name for ORACLE_HOME relative to ORACLE_BASE. The installer proceeds to validate ORACLE_HOME and check the node list.
   c. Press Enter to continue.

9. Set the user accounts:

   a. Enter the Oracle UNIX user account when prompted. The installer checks for the user on all systems.
   b. Enter the Oracle inventory group when prompted. The installer checks for group existence on all systems.
   c. Press Enter to continue.

10. The success of the configuration is reported. Press Enter to continue.

    The configuration summary is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.summary

    The installsfrac log is saved at:
    /opt/VRTS/install/logs/installsfracxxxxxxxxxx.log
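For reference, the command-line portion of steps 2 through 4 condensed into a sketch (the ORACLE_BASE and ORACLE_HOME values are examples only; substitute your own layout):

   # Example environment for the relink run; adjust paths to your installation.
   export ORACLE_BASE=/app/oracle
   export ORACLE_HOME=$ORACLE_BASE/orahome
   cd /opt/VRTS/install
   ./installsfrac -configure

The remaining steps are interactive prompts, as described above.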

To complete the patchset installation

1. Check whether Oracle is using the VERITAS ODM library. After starting the Oracle instances, you can confirm that Oracle is using the VERITAS ODM libraries by examining the Oracle alert file, alert_$ORACLE_SID.log (see the example after this procedure). Look for the line that reads:

   Oracle instance running with ODM: VERITAS xxx ODM Library, Version x.x

2. Create an Oracle database on shared storage. Use your own tools or see Creating a Starter Database on page 351 to create a starter database.
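A quick way to perform the check in step 1 from the shell (not part of the original procedure; the alert log path assumes the default 10g location under $ORACLE_BASE/admin):

   $ grep "ODM" $ORACLE_BASE/admin/<db_name>/bdump/alert_$ORACLE_SID.log

The ODM line quoted above should appear in the output once the instance has started.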


16. Configuring VCS Service Groups for Oracle 10g on SLES9

You can set up VCS to automate the Oracle RAC environment:

- Creating an Oracle Service Group on page 299
- Configuring CVM Service Group for Oracle 10g Manually on page 305
- Location of VCS Log Files on page 309

Creating an Oracle Service Group

The Oracle 10g RAC configuration wizard guides you through the modification of the existing CVM group to include CRS Application, PrivNIC, CFSMount, and CVMVolDg resources.

Before starting the wizard, verify that your Oracle installation can be configured:

- Oracle 10g CRS must be running.
- Oracle RAC instances and listeners must be running on all cluster nodes.
- The database files of all instances must be on shared storage, in a cluster file system or on shared raw volumes.

Make sure you have available the information the wizard needs as it proceeds:

- Names of the database instances to be configured
- Private IP addresses used by Oracle 10g CRS
- 10g CRS OCR location
- Vote disk location

Establishing Graphical Access for the Wizard

The configuration wizard requires graphical access to the VCS systems where you want to configure service groups. If your VCS systems do not have monitors, or if you want to run the wizards from a remote UNIX system, do the following:

To establish graphical access from a remote system

1. From the remote system (jupiter, for example), run:

   # xhost +

2. Do one of the following, depending on your shell. For the Bourne shell (bash, sh, or ksh), run this step on one of the systems where the wizard is to run, for example galaxy:

   # export DISPLAY=jupiter:0.0

   For the C shell (csh), run:

   # setenv DISPLAY jupiter:0.0

3. Verify that the DISPLAY environment variable has been updated:

   # echo $DISPLAY
   jupiter:0.0

Starting the Configuration Wizard

The configuration wizard for Oracle 10g RAC is started at the command line.

1. Log on to one of your VCS systems as root.

2. Start the configuration wizard:

   # /opt/VRTSvcs/bin/hawizard rac10g
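Before the Welcome window appears, these quick checks (not from the original guide) can confirm the wizard prerequisites listed earlier, namely that the CRS daemons and the RAC instances are running:

   # ps -ef | grep -E 'crsd|evmd|ocssd' | grep -v grep
   # ps -ef | grep ora_pmon | grep -v grep

Each command should return at least one process on every cluster node.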

The Welcome Window

The VCS RAC Oracle 10g configuration wizard starts with a Welcome window that highlights the prerequisites for configuration and the information you will need to complete the configuration. If your configuration does not meet the configuration requirements, you can stop the wizard by pressing Cancel. Take the necessary steps to meet the requirements and start the wizard again (see Starting the Configuration Wizard above).

The Wizard Discovers the RAC Configuration

If you are ready to configure the Oracle service group, press Next on the Welcome screen. The wizard begins discovering the current Oracle RAC information before proceeding with the next screen. If the wizard does not find all databases in the cluster, it halts with an error, indicating the problem. Press Cancel, and start the wizard again after you correct the problem.

Database Selection Screen

The databases and their instances running on the cluster are listed on this screen. If more than one database is listed, highlight only one of them, and click Next.

Private NIC Configuration Screen

Configure the private IP address that is used by Oracle 10g CRS, and select the NICs that the PrivNIC agent should use to make this IP address highly available. Specify this information for all the cluster nodes.

CRS Configuration Screen

Specify the OCR and Vote disk locations.

Database Configuration Screen

If you have installed the database on a cluster file system, the wizard discovers the mount point and displays it on the Database Configuration screen. If the database exists on raw volumes, the wizard discovers the volumes. The wizard also discovers the OCR and Vote disk mounts if they are present on CFS and displays them; if the OCR and Vote disk are on raw volumes, the wizard discovers the volumes. You can confirm the mount options displayed, or you can modify them.

Service Group Summary Screen

After you have configured the database and listener resources, the wizard displays the configuration on a Summary screen. Click Finish to complete the configuration.

Completing the RAC 10g Configuration Screen

Click the "Bring the service groups online" check box to start the newly added resources after exiting the wizard. Click Close to exit the wizard.

To verify the state of newly added resources

1. Use hagrp -state to check the status of the cvm group.

2. Use hares -state to check the status of the resources.

Example invocations are shown at the end of this section.

To reboot the cluster nodes

A reboot is required at this stage so that CRS and the Oracle RAC database instances use the VERITAS libraries.

1. Stop CRS using /etc/init.d/init.crs stop on all nodes.

2. Stop VCS using /opt/VRTSvcs/bin/hastop -local on all nodes.

3. Reboot all nodes.
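For reference, example invocations of the verification commands above ("cvm" is the CVM group name used throughout this guide; substitute your own resource names):

   # /opt/VRTSvcs/bin/hagrp -state cvm
   # /opt/VRTSvcs/bin/hagrp -resources cvm
   # /opt/VRTSvcs/bin/hares -state <resource_name>

hagrp -resources lists the resources configured in the group; their names can then be passed to hares -state.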
