IBM Tivoli Netcool Performance Manager Wireline Component
Document Revision R2E1
Installation Guide


Note: Before using this information and the product it supports, read the information in "Notices" on page 213.

Copyright IBM Corporation 2006, 2013. US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

About this information
  Audience
  Tivoli Netcool Performance Manager - Wireline Component
  The default UNIX shell
  Network and Service Assurance community on Service Management Connect
  Tivoli technical training
  Support information
  Conventions used in this publication
    Typeface conventions

Chapter 1. Introduction
  Tivoli Netcool Performance Manager architecture
    Co-location rules
    Inheritance
    Notable subcomponents and features
  Typical installation topology
    Basic topology scenario
    Intermediate topology scenario
    Advanced topology scenario
  Tivoli Netcool Performance Manager distribution

Chapter 2. Requirements
  Minimum requirements for installation
    Linux hardware requirements
    DB2 deployment space requirements
    Tivoli Integrated Portal deployment space requirements
    Screen resolution
  Minimum requirements for a proof of concept installation
    Linux hardware requirements (POC)
    Screen resolution
  Supported operating systems and modules
    Linux platforms
  Required user names
    pvuser
    db2 user
  Ancillary software requirements
    FTP support
    Open SSH and SFTP
    File compression
    DataView load balancing
    DB2 servers and IBM Data Server clients
    Tivoli Common Reporting client
    Java Runtime Environment (JRE)
    Web browsers and settings
    X Emulation
    OMNIbus Web GUI integration
    Microsoft Office Version

Chapter 3. Installing and configuring the prerequisite software
  Overview
    Supported platforms
  Pre-Installation setup tasks
    Setting up a remote X Window display
    Changing the ethernet characteristics
    Adding the pvuser login name
    Enable FTP on Linux systems (Linux only)
    Disable SELinux (Linux only)
    Kernel parameters for DB2 database server installation (Linux)
  Deployer pre-requisites
    Operating system check
    Mount points check
    Authentication between distributed servers
    Downloading the Tivoli Netcool Performance Manager distribution to disk
    Downloading Tivoli Common Reporting to disk
  General DB2 setup tasks
    Specifying a basename for DB_USER_ROOT
    Specifying DB2 login passwords
    Assumed values
  Installing DB2 Server (64-bit)
    Download the IBM DB2 Fix Pack 1 distribution to disk
    Verifying the required operating system packages
    Creating group and user ID for a DB2 server installation
    Installing DB2 Server
    Setting up a DB2 instance
    Updating /etc/services file
    DB2 instance variable registry settings
    Starting the DB2 instance
  Installing IBM Data Server Client (64-bit)
    Downloading the IBM Data Server Client distribution to disk
    Creating group and user IDs for Data Server Client installation
    Installing IBM Data Server Client (64-bit)
  Next steps

Chapter 4. Installing in a distributed environment
  Distributed installation process
  Starting the launchpad
  Installing the Topology Editor
  Starting the Topology Editor
  Creating a new topology
  Adding and configuring the Tivoli Netcool Performance Manager components
    Adding the hosts
    Adding a database configurations component
    Adding a DataMart
    Adding a Discovery Server
    Adding a Tivoli Integrated Portal
    Adding a DataView
    Add the DataChannel administrative components
    Adding a DataChannel
    Adding a collector
    Adding a Cross Collector CME
  Saving the topology
  Opening an existing topology file
  Starting the deployer
    Primary deployer
    Secondary deployer
  Pre-deployment check
  Deploying the topology
  Install a libcrypto.so
  Installing DataView with a non-root user on a local host and reusing Tivoli Integrated Portal
  Installing DataView with a non-root user on a remote host and reusing Tivoli Integrated Portal
  Next steps
  Resuming a partially successful first-time installation

Chapter 5. Installing as a minimal deployment
  Overview
  Before you begin
    Special consideration
    Overriding default values
  Installing a minimal deployment
    Starting the launchpad
    Starting the installation
    The post-installation script
  Next steps
    Downloading the MIB-II files

Chapter 6. Modifying the current deployment
  Opening a deployed topology
  Adding a new component
  Changing configuration parameters of existing Tivoli Netcool Performance Manager components
  Moving components to a different host
    Moving a deployed collector to a different host
    Moving a deployed SNMP collector
    Moving a deployed UBA bulk collector
    Changing the port for a collector
  Modifying Tivoli Integrated Portal and Tivoli Common Reporting ports
    Changing ports for the Tivoli Common Reporting console
    Port assignments
    Viewing the application server profile

Chapter 7. Using the High Availability Manager
  Overview
    HAM basics
    The parts of a collector
    Clusters
    HAM cluster configuration
    Types of spare hosts
    Types of HAM clusters
    Example HAM clusters
    Resource pools
    How the SNMP collector works
    How failover works with the HAM and the SNMP collector
    Obtaining collector status
  Creating a HAM environment
    Topology prerequisites
    Procedures
    Create the HAM and a HAM cluster
    Add the designated spare
    Add the managed definitions
    Define the resource pools
    Save and start the HAM
  Creating an additional HAM environment
  Modifying a HAM environment
  Removing HAM components
  Stopping and restarting modified components
  Viewing the current configuration
    Show Collector Process... dialog
    Show Managed Definition... dialog

Chapter 8. Enabling Common Reporting on Tivoli Netcool Performance Manager
  Model Maker
  The Base Common Pack Suite
  Installing the BCP package from the distribution

Chapter 9. Uninstalling components
  Removing a component from the topology
    Restrictions and behavior
    Removing a component
  Uninstalling the entire Tivoli Netcool Performance Manager system
    Order of uninstall
    Restrictions and behavior
    Performing the uninstall
  Uninstalling the Topology Editor
  Removing the residual files

Appendix A. Remote installation issues
  When remote install is not possible
    FTP is possible, but REXEC or RSH are not
    Neither FTP nor REXEC/RSH are possible
  Installing on a remote host using a secondary Deployer

Appendix B. DataChannel architecture
  Data collection
  Data aggregation
  Management programs and watchdog scripts
  DataChannel application programs
  Starting the DataLoad SNMP collector
  DataChannel management components in a distributed configuration
  Manually starting the Channel Manager programs
  Adding DataChannels to an existing system
  DataChannel terminology

Appendix C. Aggregation sets
  Overview
  Configuring aggregation sets
  Installing aggregation sets
    Start the Tivoli Netcool Performance Manager setup program
    Set aggregation set installation parameters
    Edit aggregation set parameters file
  Linking DataView groups to timezones

Appendix D. Deployer CLI options
  Using the -DTarget option

Appendix E. Secure file transfer installation
  Overview
  Enabling SFTP
  Installing OpenSSH
    Linux systems
  Configuring OpenSSH
    Configuring the OpenSSH server
    Configuring the OpenSSH client
    Generating public and private keys
  Testing OpenSSH and SFTP
  Troubleshooting Tivoli Netcool Performance Manager SFTP errors

Appendix F. LDAP integration
  Supported LDAP servers
  LDAP configuration
  Enable LDAP configuration
  Verifying the DataView installation
  Assigning Tivoli Netcool Performance Manager roles to LDAP users

Appendix G. Using silent mode
  Sample properties files
  The Deployer
    Running the Deployer in silent mode
    Confirming the status of a silent install
    Restrictions
  The Topology Editor

Appendix H. Installing an interim fix
  Overview
  Installation rules
  Behavior and restrictions
  Before you begin
  Installing a patch

Appendix I. Error codes and log files
  Error codes
    Deployer messages
    Topology Editor messages
    InstallAnywhere messages
  Log files
    COI log files
    Deployer log file
    Eclipse log file
    Trace log file

Appendix J. Troubleshooting
  Deployment problems
  Saving installation configuration files
  Tivoli Netcool Performance Manager component problems
  Topology Editor problems
  Java problems

Notices


About this information

IBM Tivoli Netcool Performance Manager is a bundled product consisting of a wireline component and a wireless component. This Tivoli Netcool Performance Manager release is applicable to the wireline component only.

The purpose of this information is to help you install the Tivoli Netcool Performance Manager product suite and the DB2 database management system. You can find instructions for installing Tivoli Netcool Performance Manager components, but not necessarily for configuring the installed components into a finished system that produces management reports. After going through the steps in this information, you will have a set of running Tivoli Netcool Performance Manager components ready to configure into a fully functional system.

The goal of this guide is to get each component installed and running in its barest form. The running component does not necessarily have network statistical data flowing into and out of it yet. In particular, at the end of this installation procedure, there are few or no management reports that can be viewed in DataView. Configuring installed components into a working system is the subject of other manuals in the Tivoli Netcool Performance Manager documentation set.

Audience

The audience for this information is the network administrator or operations specialist responsible for installing the Tivoli Netcool Performance Manager product suite on an enterprise network.

To install Tivoli Netcool Performance Manager successfully, you should have a thorough understanding of the following subjects:
v Basic principles of TCP/IP networks and network management
v SNMP concepts
v Administration of the Linux operating environment
v Administration of the DB2 database management system
v Tivoli Netcool Performance Manager

Tivoli Netcool Performance Manager - Wireline Component

IBM Tivoli Netcool Performance Manager consists of a wireline component (formerly Netcool/Proviso) and a wireless component (formerly Tivoli Netcool Performance Manager for Wireless).

Tivoli Netcool Performance Manager - Wireline Component consists of the following subcomponents:
v DataMart is a set of management, configuration, and troubleshooting GUIs. The Tivoli Netcool Performance Manager System Administrator uses the GUIs to define policies and configuration, and to verify and troubleshoot operations.

v DataLoad provides flexible, distributed data collection and data import of SNMP and non-SNMP data to a centralized database.
v DataChannel aggregates the data collected through Tivoli Netcool Performance Manager DataLoad for use by the Tivoli Netcool Performance Manager DataView reporting functions. It also processes online calculations and detects real-time threshold violations.
v DataView is a reliable application server for on-demand, web-based network reports.
v Technology Packs extend the Tivoli Netcool Performance Manager system with service-ready reports for network operations, business development, and customer viewing.

The following figure shows the different Tivoli Netcool Performance Manager modules.

Tivoli Netcool Performance Manager documentation consists of the following:
v Release notes
v Configuration recommendations
v User guides
v Technical notes
v Online help

The documentation is available for viewing and downloading on the information center at com.ibm.tnpm.doc/welcome_tnpm.html.

The default UNIX shell

The installation scripts and procedures in this information generally presume, but do not require, the use of the Korn or Bash shells, and only Korn shell syntax is shown in examples. If you use the C shell or Tcsh, make the necessary adjustments to the commands shown as examples throughout this information.

This guide uses the following shell prompts in the examples:
v # (pound sign) indicates commands you perform when logged in as root.
v $ (dollar sign) indicates commands you perform when logged in as db2 or pvuser.
v clpplus indicates the command line processor plus, which provides a command-line user interface that you can use to connect to databases and to define, edit, and run statements, scripts, and commands.

Network and Service Assurance community on Service Management Connect

Connect, learn, and share with Service Management professionals: product support technical experts who provide their perspectives and expertise. Access Service Management Connect at servicemanagement/nsa/index.html.

Use Service Management Connect in the following ways:
v Become involved with transparent development, an ongoing, open engagement between other users and IBM developers of Tivoli products. You can access early designs, sprint demonstrations, product roadmaps, and prerelease code.
v Connect one-on-one with the experts to collaborate and network about Tivoli and the Network and Service Assurance community.
v Read blogs to benefit from the expertise and experience of others.
v Use wikis and forums to collaborate with the broader user community.

Tivoli technical training

For Tivoli technical training information, see the IBM Tivoli Education website.

Support information

If you have a problem with your IBM software, you want to resolve it quickly. IBM provides the following ways for you to obtain the support you need:

Online
Access the IBM Software Support site at support/probsub.html.

IBM Support Assistant
The IBM Support Assistant is a free local software serviceability workbench that helps you resolve questions and problems with IBM software products. The Support Assistant provides quick access to support-related

information and serviceability tools for problem determination. To install the Support Assistant software, go to support/isa.

Troubleshooting Guide
For more information about resolving problems, see the problem determination information for this product.

Conventions used in this publication

Several conventions are used in this publication for special terms, actions, commands, and paths that are dependent on your operating system.

Typeface conventions

This publication uses the following typeface conventions:

Bold
v Lowercase commands and mixed case commands that are otherwise difficult to distinguish from surrounding text
v Interface controls (check boxes, push buttons, radio buttons, spin buttons, fields, folders, icons, list boxes, items inside list boxes, multicolumn lists, containers, menu choices, menu names, tabs, property sheets) and labels (such as Tip: and Operating system considerations:)
v Keywords and parameters in text

Italic
v Citations (examples: titles of publications, diskettes, and CDs)
v Words defined in text (example: a nonswitched line is called a point-to-point line)
v Emphasis of words and letters (words as words example: "Use the word that to introduce a restrictive clause."; letters as letters example: "The LUN address must start with the letter L.")
v New terms in text (except in a definition list): a view is a frame in a workspace that contains data.
v Variables and values you must provide: ... where myname represents ...

Monospace
v Examples and code examples
v File names, programming keywords, and other elements that are difficult to distinguish from surrounding text
v Message text and prompts addressed to the user
v Text that the user must type
v Values for arguments or command options

Bold monospace
v Command names, and names of macros and utilities that you can type as commands
v Environment variable names in text
v Keywords
v Parameter names in text: API structure parameters, command parameters and arguments, and configuration parameters
v Process names
v Registry variable names in text
v Script names

Chapter 1. Introduction

Introduction to Tivoli Netcool Performance Manager installation. This chapter provides an overview of the Tivoli Netcool Performance Manager product suite and important pre-installation setup information. Additionally, it provides an overview of the installation interface introduced in this version.

Tivoli Netcool Performance Manager architecture

Tivoli Netcool Performance Manager system components.

The Tivoli Netcool Performance Manager components run on:
v Linux servers

Exact, release-specific requirements, prerequisites, and recommendations for hardware and software are described in detail in the IBM Tivoli Netcool Performance Manager: Configuration Recommendations Guide. You can work with Professional Services to plan and size the deployment of Tivoli Netcool Performance Manager components in your environment.

The following diagram provides a high-level overview of the Tivoli Netcool Performance Manager architecture.

The Tivoli Netcool Performance Manager system components are as follows:
v Tivoli Netcool Performance Manager database - The Tivoli Netcool Performance Manager database is hosted on DB2.
v Tivoli Netcool Performance Manager DataMart - Tivoli Netcool Performance Manager DataMart is the user and administrative interface to the Tivoli Netcool Performance Manager database and to other Tivoli Netcool Performance Manager components.

v Tivoli Netcool Performance Manager DataLoad - Tivoli Netcool Performance Manager DataLoad consists of one or more components that collect network statistical raw data from network devices and from network management systems.
v Tivoli Netcool Performance Manager DataChannel - Tivoli Netcool Performance Manager DataChannel is a collection of components that collect data from DataLoad collectors, aggregate and process the data, and load the data into the Tivoli Netcool Performance Manager database. DataChannel components also serve as the escalation point for collected data that is determined to be over threshold limits.
v Tivoli Netcool Performance Manager DataView - Tivoli Netcool Performance Manager DataView is the Web server hosting and analysis platform. This platform is used to display Web-based management reports based on network data aggregated and placed in the Tivoli Netcool Performance Manager database.
v Tivoli Netcool Performance Manager Technology Packs - Each technology pack is a set of components that describes the format and structure of network statistical data generated by network devices. Each technology pack is specific for a particular device, or class of devices; or for a particular company's devices; or for a protocol (such as standard SNMP values) common to many devices.
v Tivoli Integrated Portal - The Tivoli Integrated Portal application provides a database-aware Web server foundation for the Web-based management reports displayed by Tivoli Netcool Performance Manager DataView. The Tivoli Integrated Portal application server is an essential component of each DataView installation.

Co-location rules

Allowed component deployment numbers and co-location rules.

Table 1 lists how many of each component can be deployed per Tivoli Netcool Performance Manager system and whether multiple instances can be installed on the same server. In this table:
v N - Depends on how many subchannels there are per channel, and how many channels there are per system. For example, if there are 40 subchannels per channel and 8 channels, theoretically N=320. However, the practical limit is probably much lower.
v System - The entire Tivoli Netcool Performance Manager system.
v Per host - A single physical host can be partitioned using zones, which effectively gives you multiple hosts.

Note: All CME, DLDR, FTE, and LDR components within a channel must share the same filesystem.

Table 1. Co-location rules (Component / Number of instances allowed / Co-location constraints / Co-location constraints supported by Deployer?)
v AMGR / One per host that supports DataChannel components / - / Yes

v BCOL / N per system; one per corresponding subchannel / - / Yes
v CME / One per subchannel / Filesystem / Yes
v CMGR / One per system / - / Yes
v Database / One per system / - / Yes
v Database channel / One per DataChannel; maximum of 8 / - / Yes
v DataLoad (SNMP collector) / N per system; one per corresponding subchannel; one per host / - / Yes
v DataMart / N per system; one per host / - / Yes
v DataView / N per system; one per host / - / Yes
v Discovery Server / N per system; one per host / Co-locate with corresponding DataMart / Yes
v DLDR / One per channel / Filesystem / Yes
v FTE / One per subchannel / Filesystem / Yes
v HAM / N+M per system, where N is the number of collectors that HAM is monitoring, and M is the number of standby collectors / - / Yes
v LDR / One per channel / Filesystem / Yes
v Log / One per system / - / Yes
v UBA (simple) / N per system; one per corresponding subchannel / - / Yes
v UBA (complex) / Pack-dependent / Pack-dependent / Pack-dependent

In the Logical view of the Topology Editor, the DataChannel component contains the subchannels, LDR, and DLDR components, with a maximum of 8 channels per system. The subchannel contains the collector, FTE, and CME, with a maximum of 40 subchannels per channel.

Inheritance

Inheritance is the method by which a parent object propagates its property values to a child component.

The following rules should be kept in mind when dealing with these properties:
v A child property can be read-only, but is not always.
v If the child property is not read-only, then it can be changed to a value different from the parent property.
v If the parent property changes, and the child and parent properties were the same before the change, then the child property is changed to reflect the new parent property value.
v If the child property changes, the parent property value is not updated.
v The default value of the child property is always the current parent property value.

Note: When performing an installation that uses non-default values, that is, non-default usernames, passwords, and locations, it is recommended that you check both the Logical view and Physical view to ensure that they both contain the correct values before proceeding with the installation.

Example

As an example of how a new component inherits property values: the Disk Usage Server (DUS) is a child component of the Host object. The DUS Remote User property inherits its value from the Host PV User property on creation of the DUS; the DUS property value is taken from the Host property value. Child properties that have been inherited are marked as inherited.

As an example of what happens when you change inherited property values: if you change the Host PV User property value, it is pushed down to the DUS Remote User property value, updating it. The associated default value is also updated. If you change the DUS Remote User property value, that is, the child value, it does not propagate up to the host; the parent Host PV User property value remains unchanged. Now the child and parent properties are out of sync, and if you change the parent property value it is not reflected in the child property, though the default value continues to be updated.

Notable subcomponents and features

The following sections describe a subset of the Tivoli Netcool Performance Manager features that should be considered before deciding on your topology configuration.

Collectors

Collectors description.

The DataLoad collector takes in the unrefined network data and stores it in a file that Tivoli Netcool Performance Manager can read. This file is known as a binary object format (BOF) file. The following processes are employed in the DataLoad module:
v SNMP Collector - The DataLoad SNMP Collector sends SNMP requests to network objects. Only the data requested by the configuration that was defined for those network objects is retrieved.
v Bulk Collector - The Bulk Collector uses a Bulk Adaptor, which is individually written for specific network resources, to format the unrefined data into a file, called a PVline file, which is passed to the Bulk Collector.

Installation or topology considerations:

Installation and topology considerations for collectors.

The DataLoad modules can be loaded on lightweight servers and placed as close to the network as possible (often inside the network firewall). Because a DataLoad module does not contain a database, the hardware can be relatively inexpensive and can still reliably handle high volumes of data. Up to 320 DataLoad modules can be supported per Tivoli Netcool Performance Manager installation.

The number of collectors in your system affects the topology configuration. You can have multiple bulk collectors, UBA or BCOL, on a single host, but you can have only one SNMP-based collector per host. The number of collectors is in turn driven by the number of required technology packs.

Technology packs

Technology packs description.

Tivoli Netcool Performance Manager technology packs are custom-designed collections of MIBs, discovery formulas, collection formulas, complex formulas, grouping rules, reporters, and other functions. Technology packs provide everything Tivoli Netcool Performance Manager needs to gather data for targeted devices. Technology packs make it possible for Tivoli Netcool Performance Manager to report on technology from multiple vendors.

Installation or topology considerations:

Installation and topology considerations for technology packs.

If you are creating a UBA collector, you must associate it with a specific technology pack.

Note: General installation information for technology packs can be found in the IBM Tivoli Netcool Performance Manager: Pack Installation and Configuration Guide; pack-specific installation guides are also provided. Consult both sets of documentation for important installation or topology information.

High Availability

High Availability description.

High availability is implemented for Tivoli Netcool Performance Manager by the following component:

High Availability Manager (HAM)
A DataChannel component that can be configured to handle availability of SNMP collectors.

For information on high availability for the DB2 database, see High Availability and Disaster Recovery Options for DB2 for Linux, UNIX, and Windows.

Installation or topology considerations:

Installation and topology considerations for the High Availability Manager.

The High Availability Manager must be put on the same machine as the Channel Manager.

Disk Usage Server

The Disk Usage Server component is responsible for maintaining the properties necessary for quota management (flow control) of DataChannel. The DataChannel component requires a Disk Usage Server, and DataChannel components can be added only to hosts that include a Disk Usage Server.

Multiple Disk Usage Servers can be configured per host, therefore allowing multiple DataChannel directories to exist on a single host. There are two major reasons why a user might want to configure multiple Disk Usage Servers:

Disk space is running low
Disk space might be impacted by the addition of a new DataChannel component, in which case the user might want to add a file system that is managed by a new Disk Usage Server.

Separate disk quota management
The user might want to separately manage the quotas that are assigned to discrete DataChannel components. For more information, see Disk quota management.

The user can assign the management of a new file system to a Disk Usage Server by editing the local_root_directory property of that Disk Usage Server by using the Topology Editor. The user can then add DataChannel components to the host, and can assign each component to a Disk Usage Server, either in the creation wizard or by editing the DUS_NUMBER property inside the component.

Disk quota management:

Disk quota management description.

The addition of a Disk Usage Server makes the process of assigning space to a component much easier than it has been previously. A user is no longer required to calculate the requirements of each component and assign that space individually; components now work together to use the space they have under the Disk Usage Server more effectively. Also, the user is relieved of trying to figure out which component needs extra space and then changing the quota for that component. Now, the user can change the quota of the Disk Usage Server, and all components on that Disk Usage Server get the update and share the space on an as-needed basis.

Good judgment of space requirements is still needed. However, the estimating of space requirements is made at a higher level; and if an estimate is incorrect, only one number must be changed instead of potentially updating the quota for each component separately.

Flow control:

Optimized flow control further eliminates problems with component-level quotas. Each component holds on to only five hours of input and output; after it has reached this limit, it stops processing until the downstream component picks up some of the data. This avoids the cascading scenario where one component stops processing and the components feeding it begin to stockpile files, which results in the quota being filled and causes all components to shut down because they have run out of file space.

Installation or topology considerations:

Installation or topology considerations for flow control.

DataChannel components can only be added to hosts that include a Disk Usage Server.

Typical installation topology

Example topology scenarios.

Table 2 provides an example of where to install Tivoli Netcool Performance Manager components, using four servers. Use this example as a guide to help you determine where to install the Tivoli Netcool Performance Manager components in your environment.

Basic topology scenario

A basic example topology.

Table 2. Tivoli Netcool Performance Manager basic topology scenario

Server: delphi
Components hosted:
v DB2 server
v Tivoli Netcool Performance Manager Database
v Tivoli Netcool Performance Manager DataMart
v Tivoli Netcool Performance Manager Discovery Server
Notes: Install the Topology Editor and primary deployer on this system.

Server: corinth
Components hosted:
v DB2 client
v Tivoli Netcool Performance Manager DataLoad, SNMP collector
v Tivoli Netcool Performance Manager DataLoad, Bulk Load collector
Notes: You can install Tivoli Netcool Performance Manager components remotely on this system.

Server: sparta
Components hosted:
v DB2 client
v Tivoli Netcool Performance Manager DataChannel
Notes: You can install Tivoli Netcool Performance Manager components remotely on this system.

Server: athens
Components hosted:
v DB2 client
v Tivoli Integrated Portal
v Tivoli Netcool Performance Manager DataView
Notes: You can install Tivoli Netcool Performance Manager components remotely on this system. Your configuration can use a pre-existing Tivoli Integrated Portal, or install and include a new instance.

Intermediate topology scenario

An intermediate example topology scenario.

Table 3. Tivoli Netcool Performance Manager intermediate topology scenario

Server: delphi
Components hosted:
v DB2 server
v Tivoli Netcool Performance Manager Database
v Tivoli Netcool Performance Manager DataMart
v Tivoli Netcool Performance Manager Discovery Server
Notes: Install the Topology Editor and primary deployer on this system.

Server: corinth
Components hosted:
v DB2 client
v Tivoli Netcool Performance Manager DataLoad, SNMP collector
v Tivoli Netcool Performance Manager DataLoad, Bulk Load collector
Notes: You could install Tivoli Netcool Performance Manager components remotely on this system.

Server: sparta
Components hosted:
v DB2 client
v Tivoli Netcool Performance Manager DataChannel
Notes: You could install Tivoli Netcool Performance Manager components remotely on this system.

Server: thessaloniki
Components hosted:
v DB2 client
v Tivoli Netcool Performance Manager DataChannel, also running the Channel Manager
v Tivoli Netcool Performance Manager DataLoad, SNMP collector
v Tivoli Netcool Performance Manager DataLoad, Bulk Load collector
v High Availability Manager
Notes: This server contains a duplicate set of collectors to allow for high availability. You could install Tivoli Netcool Performance Manager components remotely on this system.

Server: athens
Components hosted:
v DB2 client
v Tivoli Integrated Portal
v Tivoli Netcool Performance Manager DataView
Notes: You could install Tivoli Netcool Performance Manager components remotely on this system. Your configuration can use a pre-existing Tivoli Integrated Portal, or install and include a new instance.

This scenario adds a copy of both collectors on corinth to a second machine, thessaloniki, for the purposes of failover. The High Availability Manager only manages SNMP collectors; therefore, the High Availability Manager in this scenario will manage availability of the DataLoad SNMP collector and not the Bulk Load collector. The High Availability Manager must be put on the same machine as the Channel Manager.

Advanced topology scenario

An advanced example topology scenario.

Table 4. Tivoli Netcool Performance Manager advanced topology scenario

Server: delphi
Components hosted:
v DB2 server
v Tivoli Netcool Performance Manager Database
v Tivoli Netcool Performance Manager DataMart
v Tivoli Netcool Performance Manager Discovery Server
Notes: Install the Topology Editor and primary deployer on this system.

Server: corinth
Components hosted:
v DB2 client
v Tivoli Netcool Performance Manager DataLoad, SNMP collector
v Tivoli Netcool Performance Manager DataLoad, Bulk Load collector
Notes: You can install Tivoli Netcool Performance Manager components remotely on this system.

Server: sparta
Components hosted:
v DB2 client
v Tivoli Netcool Performance Manager DataChannel
Notes: You can install Tivoli Netcool Performance Manager components remotely on this system.

Server: thessaloniki
Components hosted:
v DB2 client
v Tivoli Netcool Performance Manager DataChannel, also running the Channel Manager
v Tivoli Netcool Performance Manager DataLoad, SNMP collector
v Tivoli Netcool Performance Manager DataLoad, Bulk Load collector
v High Availability Manager
Notes: You can install Tivoli Netcool Performance Manager components remotely on this system.

Server: athens
Components hosted:
v DB2 client
v Tivoli Integrated Portal
v Tivoli Netcool Performance Manager DataView
Notes: You can install Tivoli Netcool Performance Manager components remotely on this system. Your configuration can use a pre-existing Tivoli Integrated Portal, or install and include a new instance.

Server: rhodes
Components hosted:
v DB2 client
v Tivoli Integrated Portal
v Tivoli Netcool Performance Manager DataView
Notes: You can install Tivoli Netcool Performance Manager components remotely on this system. Your configuration can use a pre-existing Tivoli Integrated Portal, or install and include a new instance.

Tivoli Netcool Performance Manager distribution

How to obtain the product distribution.

The Tivoli Netcool Performance Manager distribution is available as a DVD/CD and as an electronic image. The instructions in this guide assume that you are installing from an electronic image. If you install the product from an electronic image, be sure to keep a copy of the distribution image in a well-known directory, because you will need this image in the future to make any changes to the environment, including uninstalling Tivoli Netcool Performance Manager.

The Tivoli Netcool Performance Manager distribution DVD/CD contains:
v Tivoli Netcool Performance Manager
v Model Maker
v Tivoli Netcool Performance Manager TCR Time BCP
v DB2, version for Windows, English

Download the Wireline Common BCP from the latest BCP Suite.


Chapter 2. Requirements

This chapter describes the complete set of requirements for Tivoli Netcool Performance Manager.

Minimum requirements for installation

The minimum required host specifications for a Tivoli Netcool Performance Manager deployment.

Linux hardware requirements

Tivoli Netcool Performance Manager has a minimal deployment space requirement on Linux. Tivoli Netcool Performance Manager has the following minimum requirements for the Linux environment:
v 3 x Intel Xeon 5500/5600 series processors (quad-core), 2.4 GHz or greater
v 16 GB memory
v 2 x 300 GB HDD

Note: DataLoad requires virtualization (virtual machine). For more information, see DataLoad SNMP on multiple CPU servers.

If you are deploying in a distributed environment, each server or virtual machine requires the following:
v 1 x Intel Xeon 5500/5600 series processor (quad-core), 2.4 GHz
v 4 GB RAM
v 300 GB disk space

Other deployment configurations must be sized by IBM Professional Services.

DataLoad SNMP on multiple CPU servers

Server requirements for DataLoad SNMP.

Tivoli Netcool Performance Manager DataLoad SNMP Collector supports only single and dual CPU servers. Installing Tivoli Netcool Performance Manager DataLoad SNMP Collector on servers having more than two CPUs will cause performance problems. This is not a supported configuration, unless a virtual partitioning mechanism on Red Hat Enterprise Linux isolates the collector on a virtual host that has no more than two CPUs.

It is recommended that you install Tivoli Netcool Performance Manager DataLoad SNMP Collectors on different systems from Tivoli Netcool Performance Manager DataMart, Tivoli Netcool Performance Manager Database, Tivoli Netcool Performance Manager DataView, and other DataLoad SNMP or Bulk Collectors. For demonstrations, evaluations, and small amounts of data collection, you can install the previously listed components on the same system.

These restrictions do not apply to a Tivoli Netcool Performance Manager DataLoad Bulk Collector.
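To confirm how many CPUs a host presents before you place an SNMP collector on it, you can use standard Linux tools. This is a quick sanity check only, and the sizing rules stated above remain authoritative:

# grep -c ^processor /proc/cpuinfo

The command prints the number of logical processors that the operating system sees. If the result is greater than 2 on a host intended for a DataLoad SNMP Collector, isolate the collector in a virtual host or partition that is limited to two CPUs.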

DB2 deployment space requirements

Tivoli Netcool Performance Manager has a minimal deployment space requirement for the DB2 server.

When you install DB2, the host must conform to the previously stated hardware requirements for Linux. However, DB2 might experience problems if sufficient swap space is not provided. Before you install DB2 database server, ensure that the prerequisites are met, such as disk, memory, and paging space requirements. There are extra prerequisites that depend on your operating system:
v The same amount of swap as RAM must be present on the DB2 server host in a distributed Tivoli Netcool Performance Manager system.
v Twice as much swap as RAM must be present on the DB2 server host for a Tivoli Netcool Performance Manager proof of concept installation.

For more information, see the DB2 for Linux, UNIX, and Windows documentation for your product version.

Physical database design

A high-quality physical database design is an important factor in a successful database deployment. The choices and decisions that are made during physical database design have a long-lasting effect and far-reaching impact in terms of overall database health and the day-to-day administrative overhead that is incurred in a data center. Understanding DB2 features and how they can be applied to meet business needs is crucial to producing a high-quality physical database design that can adapt to evolving business requirements over time.

A high-quality physical database design must consider the following items:
v Business service level agreement (SLA)
v I/O bandwidth
v Performance objectives such as response time and throughput
v Recovery time
v Maintenance window
v Administrative overhead
v Reliability, availability, and serviceability (RAS)
v Data (lifecycle) management

As your business requirements change, you must reassess your physical database design. This reassessment must include periodic revisions of the design. If necessary, make configuration and data layout changes to meet your business requirements:
v Minimize I/O traffic.
v Balance design features that optimize query performance concurrently with transaction performance and maintenance operations.
v Improve the performance of administration tasks such as index creation or backup and recovery processing.

v Reduce the amount of time database administrators spend in regular maintenance tasks.
v Minimize backup and recovery elapsed time.
v Reassess overall database design as business requirements change.

Use the following design best practices for RAS:
v Design your business infrastructure with solid sizing and capacity planning for current and future needs as the business grows.
v Identify and eliminate single points of failure (SPOF) in the business infrastructure.
v Implement redundancy in your infrastructure, such as networks and mirrored disks.
v Implement high availability and disaster recovery solutions in all layers of the business infrastructure, such as database, application, and middleware.
v For DB2 databases:
  - Use separate high-performing disks for data, transaction logs, and archived logs.
  - Use mirrored logs for redundancy.
  - Create a backup and restore plan for backing up databases, table spaces, and transaction logs.

For more information, see the DB2 product configurations and best practices documentation.

Disk and memory requirements

Ensure that an appropriate amount of disk space is available for your DB2 environment, and allocate memory accordingly.

Disk requirements

The disk space that is required for your product depends on the type of installation you choose and the type of file system you have. The DB2 Setup wizard provides dynamic size estimates based on the components that are selected during a typical, compact, or custom installation. Remember to include disk space for required databases, software, and communication products. Ensure that the file system is not mounted with the concurrent I/O (CIO) option.

On Linux and UNIX operating systems, 2 GB of free space in the /tmp directory is recommended, and at least 512 MB of free space in the /var directory is required.

Note: On Linux and UNIX operating systems, you must install your DB2 product in an empty directory. If the directory that you have specified as the installation path contains subdirectories or files, your DB2 installation might fail.

Memory requirements

Memory requirements are affected by the size and complexity of your database system, the extent of database activity, and the number of clients that access your system. At a minimum, a DB2 database system requires 256 MB of RAM. For a system that is running just a DB2 product and the DB2 GUI tools, a minimum of 512 MB of RAM is required.

However, 1 GB of RAM is recommended for improved performance. These requirements do not include any additional memory requirements for other software that is running on your system. For IBM Data Server Client support, these memory requirements are for a base of five concurrent client connections. For every additional five client connections, an extra 16 MB of RAM is required.

For DB2 server products, the self-tuning memory manager (STMM) simplifies the task of memory configuration by automatically setting values for several memory configuration parameters. When enabled, the memory tuner dynamically distributes available memory resources among several memory consumers, including sort, the package cache, the lock list, and buffer pools.

Paging space requirements

DB2 requires paging, also called swap, to be enabled. This configuration is required to support various functions in DB2 that monitor or depend on knowledge of swap/paging space utilization. The actual amount of swap/paging space that is required varies across systems and is not solely based on memory utilization by application software. A reasonable minimum swap/paging space configuration for most systems is 25-50% of RAM.

Systems with many small databases or multiple databases that are tuned by STMM might require a paging space configuration of 1 x RAM or higher. These higher requirements are due to virtual memory pre-allocated per database or instance, and retained virtual memory in the case of STMM tuning multiple databases. Extra swap/paging space might be needed to provision for unanticipated memory overcommitment on a system. For more information, see the prerequisites for a DB2 database server installation (Linux and UNIX) in the DB2 documentation.

Some DB2 configuration parameter settings

When a DB2 database instance or a database is created, a corresponding configuration file is created with default parameter values. You can modify these parameter values to improve performance and other characteristics of the instance or database.

It is recommended that the two parameters, instance_memory and database_memory, are set to their default values in Tivoli Netcool Performance Manager 1.3.3. The disk space and memory that is allocated by the database manager based on the default values of the parameters might be sufficient to meet your needs. In some situations, however, you might not be able to achieve maximum performance by using these default values.

Configuration files contain parameters that define values such as the resources allocated to the DB2 database products and to individual databases, and the diagnostic level. There are two types of configuration files:

The database manager configuration file for each DB2 instance
The database manager configuration file is created when a DB2 instance is created. The parameters that it contains affect system resources at the instance level, independent of any one database that is part of that instance. Values for many of these parameters can be changed from the system default values to improve performance or increase capacity, depending on your system's configuration.

Database manager configuration parameters are stored in a file named db2systm. This file is created when the instance of the database manager is created. In Linux and UNIX environments, this file can be found in the sqllib subdirectory for the instance of the database manager.

The database configuration file for each individual database
A database configuration file is created when a database is created, and is located where that database exists. There is one configuration file per database. Its parameters specify, among other things, the amount of resource to be allocated to that database. Values for many of the parameters can be changed to improve performance or increase capacity. Different changes might be required, depending on the type of activity in a specific database. All database configuration parameters are stored in a file named SQLDBCONF. These files cannot be directly edited, and can be changed or viewed only via a supplied API or by a tool that calls that API.

instance_memory
The instance_memory database manager configuration parameter specifies the maximum amount of memory that can be allocated for a database partition if you are using DB2 database products with memory usage restrictions or if you set it to a specific value. In Tivoli Netcool Performance Manager 1.3.3, set this parameter to AUTOMATIC. This setting allows instance memory to grow as needed.

database_memory
The database_memory configuration parameter specifies the size of the database memory set. The database memory size counts towards any instance memory limit in effect. The setting must be large enough to accommodate the following configurable memory pools: buffer pools, the database heap, the lock list, the utility heap, the package cache, the catalog cache, the shared sort heap, and an additional minimum overflow area of 5%. In Tivoli Netcool Performance Manager 1.3.3, set this parameter to AUTOMATIC. The initial database memory size is calculated based on the underlying configuration requirements.

For more information about these parameters, see the DB2 Information Center.
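As an illustration, both parameters can be set to AUTOMATIC from the DB2 command line processor while logged in as the instance owner. This is a sketch only; the database name PV is an example, so substitute the database name used in your deployment:

$ db2 update dbm cfg using INSTANCE_MEMORY AUTOMATIC
$ db2 update db cfg for PV using DATABASE_MEMORY AUTOMATIC
$ db2 get dbm cfg | grep -i instance_memory

The first command sets the instance-level limit, the second sets the database-level memory, and the third verifies the new value. An instance restart (db2stop followed by db2start) might be required before some changes take effect.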

Tivoli Integrated Portal deployment space requirements

Tivoli Netcool Performance Manager has a minimal deployment space requirement for Tivoli Integrated Portal.

When you install DataView, you also install Tivoli Common Reporting. When you perform a remote install, a local /tmp folder is required on the deployer to contain the Tivoli Common Reporting bundle. The space requirements are as follows:

Local Tivoli Integrated Portal install:
v /<TIP_install_location> - 2 GB

Remote Tivoli Integrated Portal install:
v Local /tmp GB
v Remote /tmp GB
v Remote /<TIP_install_location> - 2 GB

Note: If you deploy many technology packs (especially Alcatel-Lucent 5620 SAM, Alcatel 5620 NM, and Nortel CS2000, which either have multiple UBAs or require multiple DataChannel applications), you might require more hardware capacity than is specified in the minimal configuration. In these situations, before you move to a production environment, IBM strongly recommends that you have IBM Professional Services size your deployment so that they can recommend extra hardware, if necessary.

Screen resolution

Recommended screen resolution details.

A screen resolution of 1024 x 768 pixels or higher is recommended when running the deployer.

Minimum requirements for a proof of concept installation

Tivoli Netcool Performance Manager has minimum required host specifications for proof of concept (POC) deployments.

Note: The minimum requirements do not account for additional functionality such as Web GUI, Cognos, and MDE, each of which has additional memory and CPU impacts.

Linux hardware requirements (POC)

The minimum system requirements for a proof of concept installation on Linux:
v 2 x Intel Xeon 5500/5600 series processors (quad-core), 2.4 GHz or greater
v 8 GB memory
v 1 x 300 GB HDD

To support:
v SNMP data only
v All Tivoli Netcool Performance Manager components deployed on a single server
v Up to 20,000 supported resources
v Three SNMP technology packs based on MIB-II, Cisco Device, and Cisco IPSLA
v 15-minute polling
v Fewer than three DataView users

Screen resolution

Recommended screen resolution details.

A screen resolution of 1024 x 768 pixels or higher is recommended when running the deployer.
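Before a remote DataView installation, you can confirm that the deployer host and the remote host have enough free space for the requirements listed above. A minimal check using standard tools; /opt/IBM/tivoli/tip is only an example install location, so substitute your own:

# df -h /tmp /opt/IBM/tivoli/tip

Compare the Avail column against the stated space requirements before starting the deployer.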

Supported operating systems and modules

The supported operating systems, modules, and third-party applications for IBM Tivoli Netcool Performance Manager, Version 1.3.3.

The following sections list the supported operating systems, modules, and third-party applications for IBM Tivoli Netcool Performance Manager, Version 1.3.3. For more information, see the Release notes - IBM Tivoli Netcool Performance Manager, Version 1.3.3, which contains the version numbers for each Tivoli Netcool Performance Manager module in Version 1.3.3.

Linux platforms

Supported Linux systems.

Tivoli Netcool Performance Manager can be installed and operated in an environment utilizing VMware partitions. This section details the Linux environment prerequisites for Tivoli Netcool Performance Manager.

Operating system

Supported operating system and kernel versions.

Tivoli Netcool Performance Manager supports the following Linux systems:
v Linux hosts running 64-bit Red Hat Enterprise Linux Version 5.9 with DB2.

To check the version of your operating system, enter:
# cat /etc/redhat-release

This command must return output similar to:
Red Hat Enterprise Linux Server release 5.9 (Tikanga)

To verify the processor type, run the following command:
# uname -p

To verify the machine type, run the following command:
# uname -m

To verify the hardware platform, run the following command:
# uname -i

All results should contain the output: x86_64

Database

Database requirements if you are using Linux.

v IBM DB2 Enterprise Server Edition, Linux 64-bit only
v IBM Data Server Client, 64-bit only

Required packages

Required packages if you are using Linux.

The following list shows the package requirements for RHEL distributions:

v libpam.so.0 (32-bit) is required for DB2 database servers to run 32-bit non-SQL routines.
v libaio.so.1 is required for DB2 database servers that are using asynchronous I/O.
v libstdc++.so.6 is required for DB2 database servers and clients.

Table 5. Package requirements for RHEL
v libaio - Contains the asynchronous library that is required for DB2 database servers.
v compat-libstdc++ - Contains libstdc++.so.6 (not required for Linux on POWER).

The following tables list the package requirements for Red Hat distributions for DB2 partitioned database servers:
v The ksh93 Korn shell is required for RHEL5 systems. The pdksh Korn shell package is required for all other DB2 database systems.
v A remote shell utility is required for partitioned database systems. DB2 database systems support the following remote shell utilities:
  - rsh
  - ssh
  By default, DB2 database systems use rsh when you run commands on remote DB2 nodes, for example, when you start a remote DB2 database partition. To use the DB2 database system default, the rsh-server package must be installed (see the following table). More information about rsh and ssh is available in the DB2 Information Center.
  If you choose to use the rsh remote shell utility, inetd (or xinetd) must be installed and running as well. If you choose to use the ssh remote shell utility, you must set the DB2RSHCMD registry variable immediately after the DB2 installation is complete (see the sketch at the end of this section). If this registry variable is not set, rsh is used.
v The nfs-utils Network File System support package is required for partitioned database systems.

All required packages must be installed and configured before you continue with the DB2 database system setup. For general Linux information, see your Linux distribution documentation.

Table 6. Package requirements for Red Hat
v /System Environment/Shell: pdksh or ksh93 - Korn shell.
v /Applications/Internet: openssh - This package contains a set of client programs, which allow users to run commands on a remote computer via a Secure Shell. This package is not required if you use the default configuration of DB2 database systems with rsh.

v /System Environment/Daemons: openssh-server - This package contains a set of server programs, which allow users to run commands from a remote computer via a Secure Shell. This package is not required if you use the default configuration of DB2 database systems with rsh.
v /System Environment/Daemons: rsh-server - This package contains a set of programs, which allow users to run commands on a remote computer. Required for partitioned database environments. This package is not required if you configure DB2 database systems to use ssh.
v /System Environment/Daemons: nfs-utils - Network File System support package. It allows access to local files from remote computers.

Extra packages that are required:
v binutils el5 (x86_64)
v compat-libstdc++ (x86_64)
v compat-libstdc++ (i386)
v elfutils-libelf el5 (x86_64)
v glibc (x86_64)
v glibc (i686) - both architectures are required
v glibc-common (x86_64)
v ksh (x86_64)
v libaio (x86_64)
v libaio (i386)
v libgcc el5 (i386)
v libgcc el5 (x86_64)
v libstdc++ el5 (i386)
v make el5 (x86_64)

Note: These are minimum required versions. Also, for some architectures both the i386 and x86_64 package versions must be verified. For example, both the i386 and the x86_64 architectures for glibc must be installed.

v elfutils-libelf-devel el5.x86_64.rpm - Requires the following interdependent packages:
  - elfutils-libelf-devel
  - elfutils-libelf-devel-static
  Note: Since these packages are interdependent, they must be installed together by using the following command:

  rpm -ivh elfutils-libelf-devel el5.x86_64.rpm elfutils-libelf-devel-static el5
v glibc-headers x86_64.rpm
  Requires the following packages:
  kernel-headers el5.x86_64.rpm
v glibc-devel x86_64.rpm
v glibc-devel i386.rpm
v gcc el5.x86_64.rpm
  Requires the following packages:
  libgomp el5.x86_64.rpm
  libstdc++-devel el5.x86_64.rpm
v gcc-c++ el5.x86_64.rpm
v libaio-devel x86_64.rpm
v libaio-devel i386.rpm
v sysstat el5.x86_64.rpm
v unixODBC x86_64.rpm
v unixODBC i386.rpm
v unixODBC-devel x86_64.rpm
v unixODBC-devel i386.rpm

The following packages are required and are checked for by the check_os.ini application:
v libxp i386
v libxp x86_64
v libxpm x86_64
v libstdc++-devel x86_64
v glibc-devel-2.5-i386
v glibc-devel-2.5-x86_64
v gcc-c++ x86_64
v openmotif i386
v openmotif x86_64

Run the db2prereqcheck command to check whether your system meets the prerequisites for the installation of a specific version of DB2 for Linux, UNIX, and Windows. For example, run the following command:
./db2prereqcheck -v <version> -s
DBT3533I The db2prereqcheck utility has confirmed that all installation prerequisites were met for DB2 database "server". Version: "<version>"
DBT3533I The db2prereqcheck utility has confirmed that all installation prerequisites were met for DB2 database "server with DB2 pureScale feature". Version: "<version>"

DataMart
DataMart requirements if you are using Linux.
Java Runtime Environment (JRE) 1.6 or higher (for the Database Information module).
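For example, you can confirm the JRE level on the DataMart host before you continue. The exact version string varies by JRE vendor and release; the output shown here is illustrative only:

$ java -version
java version "1.6.0"
Java(TM) SE Runtime Environment (build 1.6.0)

Any version reported as 1.6 or higher satisfies this requirement.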

DataLoad
DataLoad requirements if you are using Linux.
No special requirements.

DataChannel
DataChannel requirements if you are using Linux.
No special requirements.

Required user names
There are two user names that must be created when installing Tivoli Netcool Performance Manager. Two specific user names are required on any server that hosts Tivoli Netcool Performance Manager components:
pvuser
    A dedicated Tivoli Netcool Performance Manager UNIX user.
db2
    A dedicated DB2 user.

pvuser
The pvuser user name.
The Tivoli Netcool Performance Manager UNIX user, pvuser, must be added to each server that hosts a Tivoli Netcool Performance Manager component. The Tivoli Netcool Performance Manager UNIX user, which is referred to as pvuser throughout the documentation, can be given any name, as required by your organization's naming standards.
For more information about how to add the Tivoli Netcool Performance Manager UNIX user pvuser, see the Pre-Installation Setup Tasks > Adding the pvuser Login Name section of the IBM Tivoli Netcool Performance Manager: Installation Guide.

db2 user
The db2 user name.
The DB2 user db2 is added to each server that hosts a Tivoli Netcool Performance Manager component. This user is added when you install either the DB2 client or the DB2 server. The default user name is db2; however, this DB2 user can be given any name, as required by your organization's naming standards.
Note: If you select a non-default DB2 user name, you must use the same name across all instances of DB2 client and server throughout your Tivoli Netcool Performance Manager system.

Ancillary software requirements
Extra and third-party software requirements.
The following sections outline the extra software packages that are required by Tivoli Netcool Performance Manager.

FTP support
Tivoli Netcool Performance Manager requires FTP support.
Tivoli Netcool Performance Manager supports the following file transport protocols between Tivoli Netcool Performance Manager components and third-party equipment (for example, EMS):
v Microsoft Internet Information Services (IIS) FTP server

Open SSH and SFTP
Tivoli Netcool Performance Manager requires OpenSSH and SFTP support.
Tivoli Netcool Performance Manager supports encrypted Secure File Transfer (SFTP) and FTP to move data files from DataLoad to DataChannel, or from DataChannel Remote to the DataChannel Loader. Tivoli Netcool Performance Manager SFTP is compatible only with OpenSSH server and client version 3.1p1 and above. OpenSSH is freely downloadable and distributable. OpenSSH is supported on Linux.
If you use the SFTP capability, you must obtain, install, generate keys for, maintain, and support OpenSSH and any packages required by OpenSSH.
See the Tivoli Netcool Performance Manager Technical Note: DataChannel Secure File Transfer Installation for more information about installing and configuring OpenSSH.

Linux requirements
Linux requirements for using OpenSSH and SFTP.
OpenSSH is required for VSFTP to work with Tivoli Netcool Performance Manager. OpenSSH is installed by default on any RHEL system.
By default, FTP is not enabled on Linux systems. You must enable FTP on your Linux host to carry out the installation of Tivoli Netcool Performance Manager. To enable FTP on your Linux host, run the following command as root:
/etc/init.d/vsftpd start

File compression
File compression support.
Archives that are delivered as part of the IBM Tivoli Netcool Performance Manager distribution are created by using GNU Tar. The same program must be used to decompress the archives.
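For example, to unpack a distribution archive with GNU Tar (on RHEL, /bin/tar is GNU Tar; the archive name and version output shown here are illustrative only):

$ tar --version
tar (GNU tar) 1.15.1
$ tar -xzf distribution.tar.gz

The first command confirms that the tar on your PATH is the GNU implementation before you extract the archive.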

DataView load balancing
Load balancing support.
IBM Tivoli Netcool Performance Manager supports the use of an external load balancer to optimize the use of available DataView instances. The load balancer must support the following basic features:
v Basic IP-based load balancing
v Sticky sessions based on incoming IP
v Up/down status based on checking for a listening port
The following is the link to the CSS Basic Configuration Guide: css11500series/v7.20/configuration/basic/guide/bsccfggd.html

DB2 servers and IBM Data Server clients
A DB2 database system consists of a DB2 server and IBM Data Server clients. For more information about DB2 servers, see http://www.ibm.com/software/data/db2/.
Download the IBM DB2 10.1 Fix Pack 1 for Linux/x86-64 (64-bit) from the following link: Information+Management&product=ibm/Information+Management/DB2&release=10.1.*&platform=linux+64-bit,x86_64&function=fixid&fixids=*server*fp001&includesupersedes=0

DB2 server
DB2 server support.
IBM DB2 Enterprise Server Edition 10.1, Linux 64-bit only.
Note: Tivoli Netcool Performance Manager must be installed and run as a stand-alone database. It must not be placed on a server that already hosts a database, because the installation program does not support co-hosting. Co-hosting Tivoli Netcool Performance Manager also affects performance in unknown ways. If a co-host is required, you must contact IBM Professional Services for support.

DB2 client
IBM Data Server Client, 64-bit only.
Important: In a Tivoli Netcool Performance Manager system, you must install the IBM Data Server Client (64-bit) in a distributed environment only. It is not required in a stand-alone environment. In a distributed environment, install the DB2 client software on each server where you plan to install a Tivoli Netcool Performance Manager component, except for the system where you installed the DB2 server.

Tivoli Common Reporting client
Tivoli Common Reporting and Cognos client-side requirements.
Tivoli Netcool Performance Manager supports the use of:
v Tivoli Integrated Portal 2.1
v Tivoli Integrated Portal 2.2
You must install Tivoli Integrated Portal 2.1 if your system hosts software that is incompatible with Tivoli Integrated Portal 2.2.
To use Cognos, you must download and install a Windows version of Tivoli Common Reporting 2.1.
There are two prerequisites that must be in place to use Tivoli Common Reporting or Cognos in a Microsoft Windows environment:
v Framework Manager
v IBM Data Server Client

Java Runtime Environment (JRE)
Required Java support.
Java Runtime Environment (JRE) 1.6 (32-bit) is required for all servers that are hosting Tivoli Netcool Performance Manager components. The IBM JDK is not supplied and installed automatically with the DataMart, DataChannel, and DataLoad components. When you install those components on servers that are remote from the server that is hosting the primary deployer (Topology Editor and Deployer) or Tivoli Integrated Portal, the required JRE, as stated above, must be deployed to those servers separately.

Web browsers and settings
Supported browsers.
The following browsers are required to support the Web client and provide access to DataView reports:
Important: If you are using Tivoli Netcool Performance Manager with Web GUI, see OMNIbus Web GUI integration on page 28 for the browsers supported with both.
Important: No other browser types are supported.

Table 7. Windows clients

Windows Vista   v Microsoft Internet Explorer 8.0, 9.0
                v Mozilla Firefox 3.6
                v Mozilla Firefox ESR 10
Windows XP      v Microsoft Internet Explorer
                v Mozilla Firefox 3.6
                v Mozilla Firefox ESR 10

Note: When you use Windows Internet Explorer, IBM recommends that you have at least 1 GB of memory available.

For Red Hat Linux 6:
v Mozilla Firefox 3.6
v Mozilla Firefox ESR 10
The following browser settings are required:
v Enable JavaScript
v Enable cookies
For more information about web browser support, see developerworks/community/blogs/cdd16df5-7bb8-4ef1-bcb9-cefb1dd40581/entry/tnpm_1_3_2_upgrade_browser_support7?lang=en.

Browser requirements for the Launchpad
Web browser requirements for the Launchpad.
The new Launchpad has been tested on the following browser:
On Linux:
v Firefox 3.6
For information about downloading and installing these browsers, see the browser vendor's website.
Note: You must be a registered user to use this site.

Screen resolution
Recommended screen resolution details.
A screen resolution of 1152 x 864 pixels or higher is recommended for the display of DataView reports. Some reports might experience rendering issues at lower resolutions.

Report Studio
Cognos Report Studio support.
Report Studio is supported only by Microsoft Internet Explorer.

X Emulation
Remote desktop support.
For DataMart GUI access, Tivoli Netcool Performance Manager supports the following:
v Native X terminals
v Exceed V6.2
v Real VNC server 4.0
The following libraries are required for Exceed to work with Eclipse:
v libgtk
v libglib
v libfreetype
v libatk
v libcairo

v libxft
v libpango

OMNIbus Web GUI integration
OMNIbus Web GUI version support.
The IBM Tivoli Netcool/OMNIbus Web GUI Integration Guide for Wireline describes how to integrate IBM Tivoli Netcool/OMNIbus Web GUI with the wireline component of Tivoli Netcool Performance Manager. Tivoli Netcool Performance Manager has support for:
v Tivoli Integrated Portal 2.2 and OMNIbus Web GUI FP1+FP2.
The web browsers that are supported by both Web GUI and Tivoli Netcool Performance Manager are listed in the following table.

Table 8. Web client browsers supported by Web GUI

Browser             Version                    Operating system
Internet Explorer   8.0, 9.0                   Windows 2003, Windows XP, Windows Vista, Windows 2008, and Windows 7
Mozilla Firefox     v Mozilla Firefox 3.6      Windows 2003, Windows XP, Windows Vista, Windows 2008, and Windows 7
                    v Mozilla Firefox ESR 10   Red Hat Enterprise Linux (RHEL) 5.9

Note: When you use Windows Internet Explorer, IBM recommends that you have at least 1 GB of memory available.

Microsoft Office Version
Microsoft Office support.
The Tivoli Netcool Performance Manager DataView Scheduled Report option generates files that are compatible with Microsoft Office Word 2002 or higher.

Chapter 3. Installing and configuring the prerequisite software

Overview
Installing and configuring the software that is required by Tivoli Netcool Performance Manager.
This chapter describes how to install and configure the prerequisite software for Tivoli Netcool Performance Manager. Before you begin the Tivoli Netcool Performance Manager installation, you must install the prerequisite software that is listed in the IBM Tivoli Netcool Performance Manager: Configuration Recommendations Guide. The required software includes:

IBM DB2 Enterprise Server Edition 10.1, Linux 64-bit
You must install the DB2 server software on the system where you plan to install the Tivoli Netcool Performance Manager database component. For more information about installing the DB2 Version 10.1 server, see com.ibm.db2.luw.qb.server.doc in the DB2 information center.

IBM Data Server Client, 64-bit
You must install the DB2 client software on each system where you plan to install a Tivoli Netcool Performance Manager component, except for the system where you installed the DB2 server.
When you complete the steps here, the DB2 server and client are installed and running, with table spaces sized and ready to accept the installation of a Tivoli Netcool Performance Manager DataMart database. You can communicate with DB2 by using the clpplus command-line utility.

OpenSSH
You must install and configure OpenSSH before you install Tivoli Netcool Performance Manager. For details, see Appendix E, Secure file transfer installation, on page 167. Linux systems require the installation of VSFTP (Very Secure FTP).

Web browser
The launchpad requires a web browser. IBM recommends using Mozilla Firefox with the launchpad. For the complete list of supported browsers, see the IBM Tivoli Netcool Performance Manager: Configuration Recommendations Guide.

Java
Java is used by DataMart, DataLoad, and the technology packs. You must ensure that you are using the IBM JRE and not the RHEL JRE. The IBM JRE is supplied with the Topology Editor or with Tivoli Integrated Portal. To ensure that you are using the right JRE, you can either:
v Set the JRE path to conform to that used by the Topology Editor. Do this by using the following commands (using the default location for the primary deployer):
PATH=/opt/IBM/proviso/topologyEditor/jre/bin:$PATH
export PATH

v For a remote server, that is, one that does not host the primary deployer, you must download and install the required JRE, and set the correct JRE path. See the IBM Tivoli Netcool Performance Manager: Configuration Recommendations Guide for JRE download details.
Note: See the IBM Tivoli Netcool Performance Manager: Configuration Recommendations Guide for the complete list of prerequisite software and their supported versions.

Supported platforms
The platforms that are supported by Tivoli Netcool Performance Manager with DB2 support.
Refer to the following table for platform requirement information.

Tivoli Netcool Performance Manager component          Required operating system
All Tivoli Netcool Performance Manager components:    RHEL 5.9, 64-bit
v Database
v DataView
v DataChannel
v DataLoad
v DataMart

Pre-Installation setup tasks
Before installing the prerequisite software, perform the tasks that are outlined in this section.

Setting up a remote X Window display

About this task
For most installations, it does not matter whether you use a Telnet, rlogin, Xterm, or Terminal window to get to a shell prompt. Some installation steps, however, must be performed from a window that supports the X Window server protocols. This means that the steps described in later chapters must be run from an Xterm window on a remote system or from a terminal window on the target system's graphical display.
Note: See the IBM Tivoli Netcool Performance Manager: Configuration Recommendations Guide for the list of supported X emulators.

Specifying the DISPLAY environment variable
If you use an X Window System shell window such as Xterm, you must set the DISPLAY environment variable to point to the IP address and screen number of the system you are using.

About this task
Command sequences in this manual do not remind you at every stage to set this variable. If you use the su command to become different users, be especially vigilant and set DISPLAY before running X Window System-compliant programs.

Procedure
In general, set DISPLAY as follows:
$ DISPLAY=Host_IP_Address:0.0
$ export DISPLAY
To make sure that the DISPLAY environment variable is set, use the echo command:
$ echo $DISPLAY

Disabling access control to the display
If you encounter error messages when trying to run X Window System-based programs, you might need to temporarily disable X Window System access control so that an installation step can proceed.

About this task
To disable access control:

Procedure
1. Set the DISPLAY environment variable.
2. Enter the following command when logged in as root:
# xhost +
Note: Disabling access control is what enables access to the current machine from X clients on other machines.

Changing the ethernet characteristics
Before installing Tivoli Netcool Performance Manager, you must force both the ethernet adapter and the port on the switch to 100 full duplex mode - autonegotiate settings are not enough.

Linux systems
Enabling 100 full duplex mode on Linux systems.

About this task
Use your primary network interface to enable 100 full duplex mode. To check whether full duplex is enabled:

Procedure
1. Enter the following command:
# dmesg | grep -i duplex
This might result in output similar to the following:
eth0: link up, 100Mbps, full-duplex, lpa 0x45E1
2. Confirm that the output contains the words:

full-duplex
If the output does not contain these words, you must enable full duplex mode.
The example output from the command in step 1:
eth0: link up, 100Mbps, full-duplex, lpa 0x45E1
indicates that the primary network interface is eth0. The actions that are specified in the following procedure presume that your primary network interface is eth0.

Enabling full duplex mode on Linux:
To enable full duplex mode.

Procedure
1. Open the file ifcfg-eth0, which is contained in:
/etc/sysconfig/network-scripts/
2. Add the ETHTOOL_OPTS setting by adding the following text:
ETHTOOL_OPTS="speed 100 duplex full autoneg off"
Note: The ETHTOOL_OPTS speed setting can be set to either 100 or 1000, depending on whether the available connection speed is 100 Mbit/s or 1000 Mbit/s (1 Gbit/s).

Adding the pvuser login name
pvuser is the default name that is used within this document for the required Tivoli Netcool Performance Manager UNIX user. The required user can be given any name of your choosing. However, for the remainder of this document this user is referred to as pvuser.
Decide in advance where to place the home directory of the pvuser login name. Use a standard home directory that is mounted on /home or /export/home, as available.
Note: Do not place the home directory in the same location as the Tivoli Netcool Performance Manager program files. That is, do not use /opt/proviso or any other directory in /opt for the home directory.
Add the pvuser login name to every system on which you install a Tivoli Netcool Performance Manager component, including the system that is hosting the DB2 server.

Adding pvuser to a standalone computer
Use the steps in this section to add the pvuser login name to each standalone computer.

About this task
These steps add the login name only to the local system files on each computer (that is, to the local /etc/passwd and /etc/shadow files). If your network uses a network-wide database of login names such as Yellow Pages or Network Information Services (NIS or NIS+), see Adding pvuser on an NIS-managed network on page 34.
To add pvuser:

Procedure
1. Log in as root.
2. Set and export the DISPLAY environment variable (see Setting up a remote X Window display on page 30).
3. If one does not already exist, create a group to which you can add pvuser. You can create a group with the name of your choice by using the following command:
groupadd <group>
where:
v <group> is the name of the new group, for example, staff.
4. At a shell prompt, run the following command:
# useradd -g <group> -m -d <home_dir>/<username> -k /etc/skel -s /bin/ksh <username>
Where:
v <group> is the name of the group to which you want to add pvuser.
v <home_dir> is the home directory for the new user; for example, /export/home/ can be used as the example home directory.
v <username> is the name of the new user. This can be set to any string.
Note: For the remainder of this document this user is referred to as pvuser.
5. Set a password for pvuser:
# passwd pvuser
The system prompts you to specify a new password twice. The default pvuser password assumed by the Tivoli Netcool Performance Manager installer is pv. This can be set to a password that conforms to your organization's standards.
6. Test logging in as pvuser, either by logging out and back in, or with the su command, such as:
# su - pvuser
Confirm that you are logged in as pvuser with the id command:
$ id
These instructions create a pvuser login name with the following attributes:

Attribute                                                             Value
Login name                                                            pvuser
Member of group                                                       staff
Home directory                                                        /export/home/pvuser
Login shell                                                           Korn shell (/bin/ksh)
Copy skeleton setup files (.profile, and so on) from this directory   /etc/skel

Note: The pvuser account must have write access to the /tmp directory.

Multiple computer considerations
If you are creating the pvuser login name on more than one computer in your network, avoid confusion by specifying the same user ID number for each pvuser login name on each computer.
When you have created the first pvuser login name, log in as pvuser and run the id command. The system responds with the user name and user ID number (and the group name and group ID number). For example:

$ id
uid=1001(pvuser) gid=10(staff)
When you create the pvuser login name on the next computer, add the -u option to the useradd command to specify the same user ID number:
# useradd -g <group> -m -d <home_dir>/pvuser -k /etc/skel -s /bin/ksh -u 1001 pvuser
Where:
v <group> is the name of the group to which you want to add pvuser.
v <home_dir> is the home directory for the new user; for example, /export/home/ can be used as the example home directory.

Adding pvuser on an NIS-managed network
Adding pvuser on an NIS-managed network.
If your site's network uses NIS or NIS+ to manage a distributed set of login names, see your network administrator to determine whether pvuser should be added to each Tivoli Netcool Performance Manager computer's local setup files, or to the network login name database.

Enable FTP on Linux systems (Linux only)
By default, FTP is not enabled on Linux systems.

About this task
To enable FTP on your Linux host:

Procedure
1. Log in as root.
2. Change to the following directory:
# cd /etc/init.d
3. Run the following command:
# ./vsftpd start

Disable SELinux (Linux only)
Tivoli Netcool Performance Manager does not install properly if the SELinux security policy is set to "enforcing".

About this task
If the SELinux security policy is set to "enforcing", you must change it:

Procedure
1. Open the SELinux config file for editing, for example:
# vi /etc/selinux/config
2. Change the following line in the file:
SELINUX=enforcing
to:
SELINUX=disabled
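For example, to confirm the active SELinux mode, and to switch the running system to permissive mode without an immediate reboot (the setting in /etc/selinux/config still controls the mode at the next boot), you can use the standard getenforce and setenforce utilities:

# getenforce
Enforcing
# setenforce 0
# getenforce
Permissive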

Note: You can also set the SELINUX setting to permissive. Setting SELINUX to permissive results in a number of warnings at install time, but it allows the installation code to run.

Kernel parameters for DB2 database server installation (Linux)
The configuration or modification of kernel parameters for a DB2 database server installation depends on your operating system.

Kernel parameter requirements (Linux)
The database manager uses a formula to automatically adjust kernel parameter settings and eliminate the need for manual updates to these settings.

Interprocess communication kernel parameters
When instances are started, if an interprocess communication (IPC) kernel parameter is below the enforced minimum value, the database manager updates it to the enforced minimum value. The IPC kernel parameter values that are changed when a DB2 instance is started do not persist when the system is rebooted. After a reboot, kernel settings might be lower than the enforced values until a DB2 instance is started. By adjusting any kernel parameter settings, the database manager prevents unnecessary resource errors.
For the latest information about supported Linux distributions, see the DB2 documentation.

Table 9. Enforced minimum settings for Linux interprocess communication kernel parameters

IPC kernel parameter      Enforced minimum setting
kernel.shmmni (SHMMNI)    256 * <size of RAM in GB>
kernel.shmmax (SHMMAX)    <size of RAM in bytes> (1)
kernel.shmall (SHMALL)    2 * <size of RAM in the default system page size> (2)
kernel.sem (SEMMNI)       256 * <size of RAM in GB>
kernel.sem (SEMMSL)       250
kernel.sem (SEMMNS)       256000
kernel.sem (SEMOPM)       32
kernel.msgmni (MSGMNI)    1024 * <size of RAM in GB>
kernel.msgmax (MSGMAX)    65536
kernel.msgmnb (MSGMNB)    65536 (3)

1. On 32-bit Linux operating systems, the enforced minimum setting for SHMMAX is limited to 4294967295 bytes.
2. SHMALL limits the total amount of virtual shared memory that can be allocated on a system. Each DB2 data server efficiently manages the amount of system memory it consumes, also known as committed memory. The DB2 data server allocates more virtual memory than it commits, to support memory preallocation and dynamic memory management. Memory preallocation benefits performance. Dynamic memory management is the process of growing and shrinking real memory usage within separate virtual shared memory areas. To support memory preallocation and dynamic memory management effectively, data servers frequently must allocate more virtual shared memory on a system than the amount of physical RAM. The kernel requires this value as a number of pages.

Table 9. Enforced minimum settings for Linux interprocess communication kernel parameters (continued)

3. Load performance might benefit from a larger message queue size limit, which is specified in bytes by MSGMNB. You can view message queue usage by running the ipcs -q command. If the message queues are at capacity, or are reaching capacity, during load operations, consider increasing the message queue size limit, in bytes.

Other recommended kernel parameter settings
Other recommended kernel parameter settings are listed in the following table.

Table 10. Configuring other Linux kernel parameters

Recommended kernel parameter setting   Configuring the kernel parameters for DB2 data server
vm.swappiness=0                        This parameter defines how prone the kernel is to swapping application memory out of physical random access memory (RAM). The default setting is vm.swappiness=60. The recommended setting, vm.swappiness=0, configures the kernel to give preference to keeping application memory in RAM instead of assigning more memory for file caching. This setting avoids unnecessary paging and excessive use of swap space. It is especially important for data servers that are configured to use the self-tuning memory manager (STMM).
vm.overcommit_memory=0                 This parameter influences how much virtual memory the kernel permits allocating. The default setting, vm.overcommit_memory=0, sets the kernel to disallow individual processes from making excessively large allocations; however, the total allocated virtual memory is unlimited. Having unlimited virtual memory is important for DB2 data servers, which retain extra unused virtual memory allocations for dynamic memory management. Unreferenced allocated memory is not backed by RAM or paging space on Linux systems. Avoid setting vm.overcommit_memory=2, because this setting limits the total amount of virtual memory that can be allocated, which can result in unexpected errors.

Modifying kernel parameters (Linux)
You must update any kernel parameters whose current values are below the enforced minimum or recommended settings.

Before you begin
You must have root authority to modify kernel parameters.

Procedure
To update kernel parameters on Red Hat Linux, follow these steps:
1. Run the ipcs -l command to list the current kernel parameter settings.
2. Analyze the command output to determine whether you have to change kernel settings, by comparing the current values with the enforced minimum settings in Table 9. The following text is an example of the ipcs command output, with comments added after // to show the parameter names:
# ipcs -l

------ Shared Memory Limits --------
max number of segments = 4096               // SHMMNI
max seg size (kbytes) = 32768               // SHMMAX
max total shared memory (kbytes) = 8388608  // SHMALL
min seg size (bytes) = 1

------ Semaphore Limits --------
max number of arrays = 1024                 // SEMMNI
max semaphores per array = 250              // SEMMSL
max semaphores system wide = 256000         // SEMMNS
max ops per semop call = 32                 // SEMOPM
semaphore max value = 32767

------ Messages: Limits --------
max queues system wide = 1024               // MSGMNI
max size of message (bytes) = 65536         // MSGMAX
default max size of queue (bytes) = 65536   // MSGMNB

v Beginning with the first section, Shared Memory Limits, the SHMMAX limit is the maximum size of a shared memory segment on a Linux system, and the SHMALL limit is the maximum allocation of shared memory pages on a system. It is recommended to set the SHMMAX value equal to the amount of physical memory on your system. However, the minimum required on x86 systems is 268435456 bytes (256 MB), and for 64-bit systems, it is 1073741824 bytes (1 GB). The SHMALL parameter is set to 8 GB by default (8388608 KB = 8 GB). If you have more physical memory than 8 GB, and it is to be used for DB2, then increase this parameter to approximately 90% of your computer's physical memory. For instance, if you have a computer system with 16 GB of memory to be used primarily for DB2, then SHMALL should be set to 3774873 (90% of 16 GB is 14.4 GB; 14.4 GB is then divided by 4 KB, which is the base page size). The ipcs output converts SHMALL into kilobytes; the kernel requires this value as a number of pages. If you are upgrading to DB2 Version 10.1 and you are not using the default SHMALL setting, you must increase the SHMALL setting by an additional 4 GB. This increase in memory is required by the fast communication manager (FCM) for additional buffers or channels.
v The next section covers the amount of semaphores available to the operating system. The kernel parameter sem consists of four tokens: SEMMSL, SEMMNS, SEMOPM, and SEMMNI. SEMMNS is the result of SEMMSL multiplied by SEMMNI. The database manager requires that the number of arrays (SEMMNI) be increased as necessary. Typically, SEMMNI should be twice the maximum number of agents expected on the system, multiplied by the number of logical partitions on the database server computer, plus the number of local application connections on the database server computer.
v The third section covers messages on the system.

The MSGMNI parameter affects the number of agents that can be started, the MSGMAX parameter affects the size of the message that can be sent in a queue, and the MSGMNB parameter affects the size of the queue. The MSGMAX parameter should be changed to 64 KB (that is, 65536 bytes), and the MSGMNB parameter should be increased to 65536.
3. Modify the kernel parameters that you have to adjust by editing the /etc/sysctl.conf file. If this file does not exist, create it. The following lines are an example of what should be placed into the file:
#Example for a computer with 16GB of RAM:
kernel.shmmni=4096
kernel.shmmax=17179869184
kernel.shmall=8388608
#kernel.sem=<SEMMSL> <SEMMNS> <SEMOPM> <SEMMNI>
kernel.sem=250 256000 32 4096
kernel.msgmni=16384
kernel.msgmax=65536
kernel.msgmnb=65536
4. Run sysctl with the -p parameter to load the sysctl settings from the default file /etc/sysctl.conf:
sysctl -p
5. Optional: Have the changes persist after every reboot:
v (Red Hat) The rc.sysinit initialization script reads the /etc/sysctl.conf file automatically.
For the latest information about supported Linux distributions, see the DB2 documentation.

Deployer pre-requisites
Minimum filesystem specification and pre-requisites for the Deployer script.
The Deployer checks for the items described under the following headings. You should ensure that all elements are in place before running the Deployer.

Operating system check
The Deployer fails if the required patches are not installed.
The Deployer performs a check on the operating system version and verifies that the minimum required packages are installed. For more information on the complete set of requirements for installation on Linux, consult the IBM Tivoli Netcool Performance Manager: Configuration Recommendations Guide.

Mount points check
The Deployer assesses the available filesystem space for the defined mount point locations. The space requirements are calculated based on:
The defined topology
    The more components added to a single server, the more space is required on that server.
The component install location
    Any directory set as the install location for a component requires

sufficient space to store that component. The default install directory is /opt. You do not have to use the default; it can be set to any directory location that has sufficient space.
Remote installation of components
    If components are being installed remotely, sufficient space must be assigned in the /tmp directory to store the software before it can be transferred to the remote servers.
For a statement of minimum space requirements per server in a distributed install, or for a single server in a proof of concept install, see the IBM Tivoli Netcool Performance Manager: Configuration Recommendations Guide.

Authentication between distributed servers
Why you must authenticate between distributed servers.
If you are performing an installation that has a topology covering a set of distributed servers, ensure that RSA keys have been cached between servers for root and pvuser prior to installation. If there are new servers that form part of the installation topology which have not been authenticated, the installation will fail.
Note: pvuser is the required Tivoli Netcool Performance Manager UNIX user. For more information about adding this user to your system, see Adding the pvuser login name on page 32.

Downloading the Tivoli Netcool Performance Manager distribution to disk
To download the Tivoli Netcool Performance Manager distribution to a directory on a target server's hard disk:

About this task
Whether you are installing the product from an electronic image or from DVD/CD, you must copy the distribution to a writeable location on the local filesystem before beginning the installation.
To download the Tivoli Netcool Performance Manager distribution to a directory on the host from which you intend to run the Topology Editor:

Procedure
1. On the target host, log in as the Tivoli Netcool Performance Manager user, such as pvuser.
2. Create a directory to hold the contents of your Tivoli Netcool Performance Manager distribution. For example:
$ mkdir /var/tmp/cdproviso
Note: Any further references to this directory within the install are made by using the token <DIST_DIR>.
You will run a variety of scripts and programs from directories residing in the directory created on the hard drive, including:
v Pre-installation script
v Installation script
v Tivoli Netcool Performance Manager setup program

3. Download the Tivoli Netcool Performance Manager distribution to the host directory created in the previous step and expand the contents of the distribution package.

Downloading Tivoli Common Reporting to disk
To download the Tivoli Common Reporting distribution to a directory on a target server's hard disk.

About this task
Extract the Tivoli Common Reporting driver so that it can be used by the Tivoli Netcool Performance Manager Common Installer. The following process ensures that the user is required to specify the Tivoli Common Reporting media location only once:

Procedure
1. Create a folder named TCR as a peer to the other Tivoli Netcool Performance Manager components, that is, DataView, DataChannel, and so on. For example:
<DIST_DIR>/proviso/RHEL/TCR
2. Extract the Tivoli Common Reporting 2.1 distribution inside this folder.
If you decide not to extract the compressed (TAR) file as a peer to the other components, a TCR folder must still be created, with the path to the Tivoli Common Reporting install.sh of the form:
/TCR/TCRInstaller/install.sh
Note: If the user extracts the compressed (TAR) file directly into the same root location as the Tivoli Netcool Performance Manager components, then the Tivoli Common Reporting launchpad.sh overwrites the Tivoli Netcool Performance Manager Installer launchpad.sh, meaning the launchpad cannot be started for the installer.

General DB2 setup tasks
How to install DB2 for use with Tivoli Netcool Performance Manager.
To install DB2, you require the following:
v An appropriately sized server with the operating system installed and running (for the DB2 server).
Note: For a basic overview of the minimum processor speed, memory size, and disk configuration requirements for your Tivoli Netcool Performance Manager installation, see the IBM Tivoli Netcool Performance Manager: Configuration Recommendations Guide. For more information, you can contact IBM Professional Services.
v The current version of the Tivoli Netcool Performance Manager software.
v The downloaded files for the DB2 installation.
Before you install DB2, read the setup and password information.
Note: Tivoli Netcool Performance Manager must be installed and run as a stand-alone database. It must not be placed on a server that already hosts a database, because the installation program does not support co-hosting. Co-hosting Tivoli Netcool Performance Manager also affects performance in unknown ways. If a co-host is required, you must contact IBM Professional Services for support.
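For example, before you proceed, you can check whether any DB2 product is already installed on the intended database server by using the db2ls utility (this utility is present only if a DB2 copy is already installed; if the command does not exist, the server has no root DB2 installation):

# /usr/local/bin/db2ls

If this command lists an existing DB2 copy, choose a different server or contact IBM Professional Services, as described in the preceding note.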

Specifying a basename for DB_USER_ROOT
Tivoli Netcool Performance Manager components use distinct DB2 login names so that database access can be controlled separately by component, and for database troubleshooting clarity.

About this task
The Tivoli Netcool Performance Manager installation generates the appropriate login names for each Tivoli Netcool Performance Manager subsystem.

Procedure
Provide a basename, which the installation retains as the variable DB_USER_ROOT. This is not an operating system environment variable, but a variable that is used internally by the installer.
Note: The default DB_USER_ROOT value is PV. If you want to assign any other value, ensure that the value is not more than two characters long. IBM recommends that you retain the default value, PV.

Results
DB2 login names are generated from the DB_USER_ROOT basename by appending a function or subsystem identifier to the basename, as in the following examples:
v PV_ADMIN
v PV_INSTALL
v PV_LDR
v PV_CHANNEL
v PV_COLL
v PV_CHNL_MANAGER
v PV_GUI
In addition, separate DB2 login names are generated for each Tivoli Netcool Performance Manager DataChannel and subsystem, identified by an appended channel number, as in the following examples:
v PV_CHANNEL_01
v PV_CHANNEL_02
v PV_LDR_01
v PV_LDR_02

Specifying DB2 login passwords
For each component that requires a DB2 login name, you must provide a password for that login name.

About this task
In every case, the installer uses the default DB2 password, PV. DB2 passwords are case-sensitive: PV and pv are not the same. The default password is shown in uppercase, but is sometimes shown in lowercase. In both cases, the same default uppercase (PV) password is intended.

Procedure
You can retain the default password, or enter passwords of your own according to your site password standards. You must use the same password for all Tivoli Netcool Performance Manager subsystem DB2 login names. If you use different passwords for each login name, keep a record of the passwords you assign to each login name.

Results
The Tivoli Netcool Performance Manager installer uses PV for three default values, as described in the following table.

Table 11. Uses of PV as default values

Installer default value   Used as                                           Recommendation
PV                        Default value of the DB_USER_ROOT variable, the   In all instances, use the default value PV, unless your site has an explicit naming standard or an explicit password policy.
                          basename on which DB2 login names are generated
PV or pv                  Default password for all DB2 login names
PV                        Default DB2 database name

What to do next
Note: If you use a non-default value, you must remember to use the same value in all installation stages.

Assumed values
The steps in this chapter assume the following default values:

Setting                                           Assumed value
Hostname of the DB2 server                        delphi
DB2 server program files installed in DB2_BASE    /opt/db2
Operating system login name for DB2 user          db2
                                                  Note: The default name created is db2. However, you can set another name for the DB2 user.
Password for DB2 user                             db2
DB_NAME                                           PV
DB2 installed in (DATABASE_HOME)                  /opt/db2/product/10.1.0
                                                  Note: The value of DATABASE_HOME cannot contain soft links to other directories or filesystems. Be sure to specify the entire absolute path to DB2. Tivoli Netcool Performance Manager expects an Optimal Flexible Architecture (OFA) structure where DATABASE_HOME is a subdirectory of DB2_BASE.
DB_USER_ROOT                                      PV
Path for DB2 data, mount point 1                  /raid_2/db2data

Setting                                           Assumed value
Path for DB2 data, mount point 2                  /raid_3/db2data

Note: If your site has established naming or password conventions, you can substitute site-specific values for these settings. However, IBM strongly recommends using the default values the first time you install Tivoli Netcool Performance Manager. See Specifying a basename for DB_USER_ROOT on page 41 for more information.

Installing the DB2 server (64-bit)
The DB2 server (64-bit) version provides both 64-bit and 32-bit libraries. Therefore, you do not need to install the DB2 client on a Tivoli Netcool Performance Manager host where you installed the DB2 server.

About this task
Instructions on how to install the DB2 server (64-bit).

Download the IBM DB2 10.1 Fix Pack 1 distribution to disk
The DB2 installation files must be in place before you can begin the installation of DB2.

Procedure
1. Log in as root.
2. Set the DISPLAY environment variable.
3. Create a directory to hold the contents of the DB2 distribution. For example:
# mkdir /var/tmp/db2setup1010
4. Download the DB2 files to the /var/tmp/db2setup1010 directory.
5. Extract the DB2 distribution files that now exist in the /var/tmp/db2setup1010 directory.

What to do next
See your database administrator to determine whether there are any company-specific requirements for installing DB2 in your environment.
The directory that is created and to which the DB2 distribution is downloaded is from now on referred to as <DB2_DIST_DIR>.

Verifying the required operating system packages
Before you install the DB2 server, make sure that all required packages are installed on your system.

Procedure
1. Make sure all the required Linux packages are installed on your system. All packages and patches are specified in the IBM Tivoli Netcool Performance Manager: Configuration Recommendations Guide.
2. If these packages are not on your system, see the relevant operating system Installation Guide for instructions on installing supplementary package software.
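For example, you can spot-check individual packages with the rpm query command (the package names shown are taken from the requirements tables earlier in this guide; query any others in the same way):

# rpm -q libaio ksh nfs-utils

For each package, rpm prints the installed version, or reports "package ... is not installed" if the package is missing and must be added before you continue.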

Creating group and user IDs for a DB2 server installation
You can use the DB2 Setup wizard to create the users and groups during the installation process. If you want, you can create them ahead of time.

Before you begin
To perform this task, you must have root user authority to create users and groups.

About this task
Two users and groups are required. The user and group names that are used in the following instructions are documented in the following table. You can specify your own user and group names if they adhere to system naming rules and DB2 naming rules. The user IDs you create are required to complete subsequent setup tasks.

Table 12. Default users and groups

User             Description                                                     Example user name   Example group name
Instance owner   The DB2 instance is created in the instance owner home         db2                 db2iadm
                 directory. This user ID controls all DB2 processes and owns
                 all file systems and devices that are used by the databases
                 that are contained within the instance. The default user is
                 db2 and the default group is db2iadm.
Fenced user      The fenced user is used to run user-defined functions (UDFs)   db2fenc             db2fadm
                 and stored procedures outside of the address space that is
                 used by the DB2 database. The default user is db2fenc and
                 the default group is db2fadm. If you do not need this level
                 of security, for example in a test environment, you can use
                 your instance owner as your fenced user.

User ID restrictions

User IDs have the following restrictions and requirements:
v Must have a primary group other than guests, admins, users, and local.
v Can include lowercase letters (a-z), numbers (0-9), and the underscore character (_).
v Cannot be longer than eight characters.
v Cannot begin with IBM, SYS, SQL, or a number.
v Cannot be a DB2 reserved word (USERS, ADMINS, GUESTS, PUBLIC, or LOCAL), or an SQL reserved word.
v Cannot use any user IDs with root privilege for the DB2 instance ID, DAS ID, or fenced ID.
v Cannot include accented characters.

Procedure
To create the required groups and user IDs for DB2 database systems, follow these steps:
1. Log in as root user.
2. To create groups on Linux operating systems, enter the following commands:
Note: These command line examples do not contain passwords. You can use the passwd username command from the command line to set the password.
groupadd db2iadm
groupadd db2fadm
3. Create a user for each group by using the following commands:
useradd -g db2iadm -m -d /opt/db2 db2
useradd -g db2fadm -m -d /home/db2fenc db2fenc
4. Set the initial passwords by using the following commands:
passwd db2
Changing password for user db2.
New UNIX password: db2
BAD PASSWORD: it is WAY too short
Retype new UNIX password: db2
passwd: all authentication tokens updated successfully.
passwd db2fenc
Changing password for user db2fenc.
New UNIX password: db2fenc
BAD PASSWORD: it is based on a dictionary
Retype new UNIX password: db2fenc
passwd: all authentication tokens updated successfully.
5. Relax the permissions on the home directory of the db2 user, because DB2 is to be installed inside it, by using the following command:
chmod 707 /opt/db2
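To verify that the users and groups were created as intended, you can check the IDs and the home directory permissions (the numeric IDs and timestamp shown here are illustrative; yours will differ):

# id db2
uid=502(db2) gid=501(db2iadm) groups=501(db2iadm)
# ls -ld /opt/db2
drwx---rwx 2 db2 db2iadm 4096 Jan 10 10:00 /opt/db2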

Installing the DB2 server
To install the DB2 server, use these steps.

Procedure
1. Change to the directory where the DB2 database product distribution is copied, by entering the following command:
cd <DB2_DIST_DIR>
2. If you have downloaded the DB2 database product image, extract the product file by using the following commands:
gunzip <product>.tar.gz
tar -xvf <product>.tar
Where <product> is the name of the product that you downloaded.
3. Change directory to <DB2_DIST_DIR>/server.
4. Start the DB2 installation by entering the following command:
./db2_install
Important: The db2_install command is deprecated and might be removed in a future release. In this release it works as expected.
You see the following output:

./db2_install

Default directory for installation of products - /opt/ibm/db2/v10.1

***********************************************************
Install into default directory (/opt/ibm/db2/v10.1)? [yes/no]
no

Enter the full path of the base installation directory:
/opt/db2/product/10.1.0

Specify one of the following keywords to install DB2 products.
  AESE
  ESE
  CONSV
  WSE
  EXP
  CLIENT
  RTCL

Enter "help" to redisplay product names.
Enter "quit" to exit.
***********************************************************
ESE
******************************************************
Do you want to install the DB2 pureScale Feature? [yes/no]
no
DB2 installation is being initialized.

Total number of tasks to be performed: 46
Total estimated time for all tasks to be performed: 1383 second(s)

Task #1 start
Description: Checking license agreement acceptance
Estimated time 1 second(s)
Task #1 end

Task #2 start
Description: Base Client Support for installation with root privileges
Estimated time 3 second(s)
Task #2 end

Task #3 start
Description: Product Messages - English
Estimated time 13 second(s)
Task #3 end

Task #4 start
Description: Base client support
Estimated time 235 second(s)
Task #4 end

Task #5 start
Description: Java Runtime Support
Estimated time 153 second(s)
Task #5 end

Task #6 start
Description: Java Help (HTML) - English
Estimated time 7 second(s)
Task #6 end

The execution completed successfully.

For more information, see the DB2 installation log at "/tmp/db2_install.log.20295".
Note: The output above shows that the total number of tasks to be performed is 46, but the installation log shows 48 tasks. This is a known limitation.

Results
The DB2 server is installed in the /opt/db2/product/10.1.0 directory.

Setting up a DB2 instance
A DB2 instance is an environment in which you store data and run applications.

Before you begin
You must have root user authority.

Procedure
Use the db2icrt command to create an instance, by running the following commands:
cd /opt/db2/product/10.1.0/instance
./db2icrt -u db2fenc db2

Updating the /etc/services file
Port usage is specified in the /etc/services file.

Procedure
Ensure that the DB2_db2 service name is using port 50000. If not, update the following line in the /etc/services file:
DB2_db2 50000/tcp
Note: Tivoli Netcool Performance Manager uses 50000 as the default port number. If you want to use anything other than 50000, you must specify that port number at every place during the Tivoli Netcool Performance Manager installation.

DB2 instance variable registry settings
Procedure
1. Log in as db2 (the instance user).
2. Run the following commands to set the listening port and communication protocol:
[db2@tnpminlnx0119 ~]$ db2 update dbm cfg using svcename 50000
DB20000I The UPDATE DATABASE MANAGER CONFIGURATION command completed successfully.
[db2@tnpminlnx0119 ~]$ db2set db2comm=tcpip
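To confirm that both settings took effect, still logged in as the instance user, you can query the registry and the database manager configuration (db2set and db2 get dbm cfg are standard utilities in the instance environment; the SVCENAME value shown assumes the default port 50000):

$ db2set -all
[i] DB2COMM=TCPIP
$ db2 get dbm cfg | grep -i svcename
 TCP/IP Service name                          (SVCENAME) = 50000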

Starting the DB2 instance
Procedure
1. Log in as db2 (the instance user).
2. Run the following command to start the instance if it is not already running:
db2start

Results
If the instance is not running, the command starts it. If it is already running, you might receive the following message:
The database manager is already active

Installing the IBM Data Server Client (64-bit)
In a Tivoli Netcool Performance Manager system, you must install the IBM Data Server Client (64-bit) in a distributed environment only. It is not required in a stand-alone environment. In a distributed environment, install the DB2 client software on each server where you plan to install a Tivoli Netcool Performance Manager component, except for the system where you installed the DB2 server.

About this task
Instructions on how to install the IBM Data Server Client (64-bit).

Downloading the IBM Data Server Client distribution to disk
The DB2 installation files must be in place before you can begin the installation of DB2.

Before you begin
Before you begin this task, make sure that you have:
v Downloaded the Tivoli Netcool Performance Manager distribution to disk. The directory to which the Tivoli Netcool Performance Manager distribution is downloaded is referred to as <DIST_DIR>.

About this task
DB2 client installation is performed in a distributed environment only.

Procedure
1. Log in as root.
2. Create a directory to hold the contents of the IBM Data Server Client distribution. For example:
# mkdir /var/tmp/db2setup1010
3. Download the IBM Data Server Client files to the /var/tmp/db2setup1010 directory.
4. Extract the IBM Data Server Client distribution files that now exist in the /var/tmp/db2setup1010 directory.
The directory to which the IBM Data Server Client distribution is downloaded is referred to as <DB2_DIST_DIR>.

Creating group and user IDs for a Data Server Client installation
One user and one group are required.

About this task
You require only the instance owner. For more information about the instance owner and user ID restrictions, see Creating group and user IDs for a DB2 server installation on page 44.

Procedure
To create the required group and user ID for a Data Server Client installation, follow these steps:
1. Log in as a user with root user authority.
2. To create the group on Linux operating systems, enter the following command:
Note: These command line examples do not contain passwords. They are examples only. You can use the passwd username command from the command line to set the password.
groupadd db2iadm
3. Create the user for the group by using the following command:
useradd -g db2iadm -m -d /opt/db2 db2
4. Set the initial password by using the following command:
passwd db2
Changing password for user db2.
New UNIX password: db2
BAD PASSWORD: it is WAY too short
Retype new UNIX password: db2
passwd: all authentication tokens updated successfully.
5. Relax the permissions on the home directory of the db2 user, because the Data Server Client is to be installed inside it, by using the following command:
chmod 707 /opt/db2

Installing the IBM Data Server Client (64-bit)
To install the IBM Data Server Client, use these steps.

Procedure
1. Change to the directory where the DB2 database product distribution is copied, by entering the following command:
cd <DB2_DIST_DIR>
2. If you have downloaded the DB2 database product image, extract the product file by using the following commands:
gunzip <product>.tar.gz
tar -xvf <product>.tar
Where <product> is the name of the product that you downloaded.
3. Change directory to <DB2_DIST_DIR>/server.
4. Start the DB2 installation by entering the following command:
./db2_install
You see the following output:

./db2_install

Default directory for installation of products - /opt/ibm/db2/v10.1

***********************************************************
Install into default directory (/opt/ibm/db2/v10.1)? [yes/no]
no

Enter the full path of the base installation directory:
/opt/db2/product/10.1.0

Specify one of the following keywords to install DB2 products.
  AESE
  ESE
  CONSV
  WSE
  EXP
  CLIENT
  RTCL

Enter "help" to redisplay product names.
Enter "quit" to exit.
***********************************************************
CLIENT
***********************************************************
DB2 installation is being initialized.

Total number of tasks to be performed: 31
Total estimated time for all tasks to be performed: 807 second(s)

Task #1 start
Description: Checking license agreement acceptance
Estimated time 1 second(s)
Task #1 end

Task #2 start
Description: Base Client Support for installation with root privileges
Estimated time 3 second(s)
Task #2 end

Task #3 start
Description: Product Messages - English
Estimated time 13 second(s)
Task #3 end

The execution completed successfully.

For more information, see the DB2 installation log at "/tmp/db2_install.log.20296".

Results
The IBM Data Server Client is installed in the /opt/db2/product/10.1.0 directory.

Setting up a DB2 instance
A DB2 instance is an environment in which you store data and run applications.

Before you begin
You must have root user authority.

Procedure
Use the db2icrt command to create an instance, by running the following commands:
cd /opt/db2/product/10.1.0/instance
./db2icrt -s client db2

DB2 catalog settings
Procedure
1. Log in with the instance user, that is, db2.
2. Run the following commands:
db2 catalog tcpip node <node_name> remote <DB2_Server_Host> server 50000
db2 catalog db <DB_NAME> at node <node_name>
Note: Provide the hostname or IP address of the DB2 server in the command instead of <DB2_Server_Host>.
Note: Provide the database name (by default, PV) in the command instead of <DB_NAME>. <node_name> can be any name of your choice, for example, NodeA.

Results
When you complete the catalog settings, you might receive the following warning message:
DB21056W Directory changes may not be effective until the directory cache is refreshed
You can ignore this message.

Copying or overwriting extra lib32 files
You must copy two extra lib32 files after you install the IBM Data Server Client. These files can be obtained from the installation media, and then copied to the <DATABASE_HOME>/lib32 directory.

About this task
Copy the libdb2ci.so.1 and libdb2.so.1 files to the <DATABASE_HOME>/lib32 directory.

Procedure
1. Log in as root user and run the following commands:
Note: The libdb2.so.1 file already exists in the <DATABASE_HOME>/lib32 directory, but you must overwrite it with the file from the installation media.
cp <DIST_DIR>/proviso/RHEL/DataBase/RHEL5/db2/instance/libdb2ci.so.1 <DATABASE_HOME>/lib32/.
cp <DIST_DIR>/proviso/RHEL/DataBase/RHEL5/db2/instance/libdb2.so.1 <DATABASE_HOME>/lib32/.
Where,

v <DIST_DIR> is the directory path where you downloaded the Tivoli Netcool Performance Manager distribution. For example, /var/tmp/cdproviso/.
v <DATABASE_HOME> is the directory path where you installed the IBM Data Server Client. For example, /opt/db2/product/10.1.0.

2. Create soft links for these files by running the following commands:

   ln -s <DATABASE_HOME>/lib32/libdb2ci.so.1 <DATABASE_HOME>/lib32/libdb2ci.so
   ln -s <DATABASE_HOME>/lib32/libdb2.so.1 <DATABASE_HOME>/lib32/libdb2.so

Next steps

The steps that follow installation of the prerequisite software.

After you have installed the prerequisite software, you are ready to begin the actual installation of Tivoli Netcool Performance Manager. Depending on the type of installation you require, follow the directions in the appropriate topic:
v Chapter 4, Installing in a distributed environment, on page 55 - Describes how to install Tivoli Netcool Performance Manager in a distributed production environment.
v Chapter 5, Installing as a minimal deployment, on page 87 - Describes how to install Tivoli Netcool Performance Manager as a minimal deployment, which is used primarily for demonstration or evaluation purposes.
v Chapter 6, Modifying the current deployment, on page 93 - Describes how to modify an already deployed Tivoli Netcool Performance Manager topology.

If you are planning to install Tivoli Netcool Performance Manager as a distributed environment that uses clustering for high availability, review the Tivoli Netcool Performance Manager HA (High Availability) documentation, which is available for download by going to software/brandcatalog/ismlibrary/details?catalog.label=1tw10np54 and searching for Tivoli Netcool Performance Manager Wireline High Availability Solutions Documentation.
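Before moving on, a quick sanity check of the client setup can catch problems early. The following sketch is illustrative only, using this chapter's example values (node NodeA, database PV, instance user db2, <DATABASE_HOME> of /opt/db2/product/10.1.0); the connection test and the <db_user>/<password> placeholders are assumptions, not steps this guide requires:

   # as the instance user, confirm the catalog entries created above
   su - db2
   db2 list node directory
   db2 list db directory
   # optional connection test with a valid database user
   db2 connect to PV user <db_user> using <password>
   db2 connect reset
   # confirm the 32-bit client libraries and their soft links
   ls -l /opt/db2/product/10.1.0/lib32/libdb2.so /opt/db2/product/10.1.0/lib32/libdb2ci.so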


Chapter 4. Installing in a distributed environment

This section describes how to install Tivoli Netcool Performance Manager for the first time in a fresh, distributed environment. For information about installing the Tivoli Netcool Performance Manager components by using a minimal deployment, see Chapter 5, Installing as a minimal deployment, on page 87.

Distributed installation process

The main steps involved in a distributed installation.

A production Tivoli Netcool Performance Manager system that generates and produces management reports for a real-world network is likely to be installed on several servers. Tivoli Netcool Performance Manager components can be installed to run on as few as two or three servers, up to dozens of servers.

Before installing Tivoli Netcool Performance Manager, you must have installed the prerequisite software. For detailed information, see Chapter 3, Installing and configuring the prerequisite software, on page 29. In addition, you must have decided how you want to configure your system. Refer to the following sections:
v Co-location rules on page 2
v Typical installation topology on page 7
v Appendix A, Remote installation issues, on page 139

The general steps used to install Tivoli Netcool Performance Manager are as follows:
v Start the launchpad.
v Install the Topology Editor.
v Start the Topology Editor.
v Create the topology.

v Add the Tivoli Netcool Performance Manager components.
v Save the topology to an XML file.
v Start the deployer.
v Install Tivoli Netcool Performance Manager by using the deployer.

The following sections describe each of these steps in detail.

Note: Before you start the installation, verify that all the database tests have been performed. Otherwise, the installation might fail. See Chapter 3, Installing and configuring the prerequisite software, on page 29 for information about testing database connectivity.

Starting the launchpad

The steps required to start the launchpad.

Procedure

To start the launchpad, follow these steps:
1. Log in as root.
2. Set and export the DISPLAY variable. See Setting up a remote X Window display.
3. Set and export the BROWSER variable to point to your Web browser. For example, on Linux systems:

   # BROWSER=/usr/bin/firefox
   # export BROWSER

Note: The BROWSER command cannot include any spaces around the equal sign.
4. Change directory to the directory where the launchpad resides. On Linux systems:

   # cd <DIST_DIR>/proviso/RHEL

   <DIST_DIR> is the directory on the hard drive where you copied the contents of the Tivoli Netcool Performance Manager distribution. For more information, see Downloading the Tivoli Netcool Performance Manager distribution to disk on page 39.
5. Enter the following command to start the launchpad:

   # ./launchpad.sh

Installing the Topology Editor

The steps that are required to install the Topology Editor.

About this task

Note: Only one instance of the Topology Editor can exist in the Tivoli Netcool Performance Manager environment. Install the Topology Editor on the same system that hosts the database server.

You can install the Topology Editor from the launchpad or from the command line.

Procedure

To install the Topology Editor, follow these steps:
1. You can begin the Topology Editor installation procedure from the command line or from the launchpad.
   From the launchpad:
   a. On the launchpad, click the Install Topology Editor option in the list of tasks.
   b. On the Install Topology Editor page, click the Install Topology Editor link.
   From the command line:
   a. Log in as root.
   b. Change directory to the directory that contains the Topology Editor installation script. On Linux systems:

      # cd <DIST_DIR>/proviso/RHEL/Install/topologyEditor/Disk1/InstData/VM

      <DIST_DIR> is the directory on the hard disk where you copied the contents of the Tivoli Netcool Performance Manager distribution. For more information, see Downloading the Tivoli Netcool Performance Manager distribution to disk on page 39.
   c. Enter the following command:

      # ./installer.bin

2. The installation wizard opens in a separate window, displaying a welcome page. Click Next.
3. Review and accept the license agreement, then click Next.
4. Confirm that the wizard is pointing to the correct directory. The default is /opt/IBM/proviso. If you have previously installed the Topology Editor on this system, the installer does not prompt you for an installation directory and instead uses the directory where you last installed the application.
5. Click Next to continue.
6. Confirm that the wizard is pointing to the correct base installation directory of the DB2 driver (/opt/db2/product/10.1.0/java), or click Choose to go to another directory.
7. Click Next to continue.
8. Review the installation information, then click Install.
9. When the installation is complete, click Done to close the wizard.

The installation wizard installs the Topology Editor and an instance of the deployer in the following directories:

   Interface         Directory
   Topology Editor   install_dir/topologyEditor
                     For example: /opt/IBM/proviso/topologyEditor
   Deployer          install_dir/deployer
                     For example: /opt/IBM/proviso/deployer
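As an optional, illustrative check (assuming the default installation directory shown above), you can confirm that both directories were created before continuing:

   # ls -d /opt/IBM/proviso/topologyEditor /opt/IBM/proviso/deployer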

Results

The combination of the Topology Editor and the deployer is referred to as the primary deployer. For more information, see Resuming a partially successful first-time installation on page 84.

Note: To uninstall the Topology Editor, follow the instructions in Uninstalling the Topology Editor on page 135. Do not delete the /opt/IBM directory. Doing so causes problems when you try to reinstall the Topology Editor. If the /opt/IBM directory is accidentally deleted, follow these steps:
1. Change to the /var directory.
2. Rename the hidden file .com.zerog.registry.xml (for example, rename it to .com.zerog.registry.xml.backup).
3. Reinstall the Topology Editor.
4. Rename the backup file to the original name (.com.zerog.registry.xml).

Starting the Topology Editor

After you have installed the Topology Editor, you can invoke it from either the launchpad or from the command line.

Procedure
v To start the Topology Editor from the launchpad:
1. If the Install Topology Editor page is not already open, click the Install Topology Editor option in the list of tasks to open it.
2. On the Install Topology Editor page, click the Start Topology Editor link.
v To start the Topology Editor from the command line:
1. Log in as root.
2. Change directory to the directory in which you installed the Topology Editor. For example:

   # cd /opt/IBM/proviso/topologyEditor

3. Enter the following command:

   # ./topologyEditor

Note: If your DISPLAY environment variable is not set, the Topology Editor fails with a Java assertion message (core dump).

Creating a new topology

The steps required to create a new topology.

Procedure
1. In the Topology Editor, select Topology > Create new topology. The New Topology window is displayed.
2. Enter the Number of resources to be managed by Tivoli Netcool Performance Manager. A default value is provided. The size of your deployment affects the database sizing.

3. Click Finish. The Topology Editor creates the following entities:
v In the Logical view, five items are listed: Tivoli Netcool Performance Manager Topology, Cross Collector CMEs, DataChannels, DataMarts, and Tivoli Integrated Portals.
v In the Physical view, there is a new Hosts folder.

Adding and configuring the Tivoli Netcool Performance Manager components

Your next step is to add and configure the individual Tivoli Netcool Performance Manager components.

Note: When performing an installation that uses non-default values, that is, non-default user names, passwords, and locations, it is recommended that you check both the Logical view and the Physical view to ensure that they both contain the correct values before proceeding with the installation.

Adding the hosts

The first step is to specify all the servers that host Tivoli Netcool Performance Manager components.

About this task

Each host that you define has an associated property named PV User. The PV User is the default operating system user for all Tivoli Netcool Performance Manager components. You can override this setting in the Advanced Properties tab when you set the deployment properties for individual components (for example, DataMart and DataView). This allows you to install and run different components on the same system as different users.

Note: DataChannel components always use the default user that is associated with the host. The user account that is used to transfer files by using FTP or SCP/SFTP during installation is always the PV User defined at the host level, rather than at the component level.

Procedure

To add a single host to the topology, follow these steps:
1. In the Physical view, right-click the Hosts folder and select Add Host from the menu. The Add Host window opens.
2. Specify the details for the host machine. The fields are as follows:
v Host name - Enter the name of the host (for example, delphi).
v Operating system - Specifies the operating system. This field is filled in for you.
v DB2 home - Specifies the default <DATABASE_HOME> directory for all Tivoli Netcool Performance Manager components that are installed on the system (by default, /opt/db2/product/10.1.0).

v PV User - Specifies the default Tivoli Netcool Performance Manager UNIX user (for example, pvuser) for all Tivoli Netcool Performance Manager components that are installed on the system.
v PV user password - Specifies the password for the default Tivoli Netcool Performance Manager user (for example, PV).
v Create Disk Usage Server for this Host? - Selecting this check box creates a DataChannel subcomponent to handle disk quota and flow control.

If you have not chosen to create a Disk Usage Server, click Finish to create the host. The Topology Editor adds the host under the Hosts folder in the Physical view. If you have chosen to create a Disk Usage Server, click Next and the Add Host window allows you to add details for your Disk Usage Server.

3. Specify the details for the Disk Usage Server. The fields are as follows:

   Local Root Directory - The local DataChannel root directory. This property allows you to differentiate between a local directory and a remote directory that is mounted to allow for FTP access.
   Remote Root Directory - The remote directory that is mounted for FTP access. This property allows you to differentiate between a local directory and a remote directory that is mounted to allow for FTP access.
   FC FSLL - The Flow Control Free Space Low Limit property. When this set limit is reached, the Disk Usage Server contacts all components that reside in this root directory and tells them to free up all possible space.
   FC QUOTA - The Flow Control Quota property. This property allows you to set the amount of disk space, in bytes, available to Tivoli Netcool Performance Manager components on this file system.
   Remote User - The user account that is used when attempting to access this Disk Usage Server remotely.
   Remote User Password - The password of the user account that is used when attempting to access this Disk Usage Server remotely.
   Secure file transfer to be used - A Boolean indicator identifying whether ssh must be used when attempting to access this directory remotely.
   Port Number - The port number to use for remote access (sftp) in case it is a non-default port.

4. Click Finish to create the host. The Topology Editor adds the host under the Hosts folder in the Physical view.

Note: The DataChannel properties are filled in automatically at a later stage.

Adding multiple hosts

You might want to add multiple hosts at one time.

About this task

To add multiple hosts to the topology:

Procedure
1. In the Physical view, right-click the Hosts folder and select Add Multiple Host from the menu. The Add Multiple Hosts window opens.
2. Add new hosts by typing their names into the Host Name field as a comma-separated list.
3. Click Next.
4. Configure all added hosts. The Configure hosts dialog allows you to enter configuration settings and apply these settings to one or more of the specified host set. To apply configuration settings to one or more of the specified host sets:
a. Enter the appropriate host configuration values. All configuration options are described in Steps 2 and 3 of the previous process, Adding the hosts on page 60.
b. Select the check box opposite each of the hosts to which you want to apply the entered values.
c. Click Next. The hosts for which all configuration settings have been specified disappear from the set of selectable hosts.
d. Repeat steps a through c until all hosts are configured.
5. Click Finish.

Adding a database configurations component

The Database Configurations component hosts all the database-specific parameters.

About this task

You define the parameters once, and their values are propagated as needed to the underlying installation scripts.

Procedure

To add a Database Configurations component, follow these steps:
1. In the Logical view, right-click the Tivoli Netcool Performance Manager Topology component and select Add Database Configurations from the menu. The host selection window opens.
2. You must add the Database Configuration component to the same server that hosts the DB2 server (for example, delphi). Select the appropriate host by using the list.
3. Click Next to configure the mount points for the database.
4. Add the correct number of mount points. To add a mount point, click Add Mount Point. A new, blank row is added to the window. Fill in the fields as appropriate for the new mount point.
5. Enter the required configuration information for each mount point.
a. Enter the mount point location:
v Mount Point Directory Name (for example, /raid_2/db2data).

Note: The mount point directories can be named by using any string, as required by your organization's naming standards.

v Used for Metadata Tablespaces? (A check mark indicates True).
v Used for Temporary Tablespaces? (A check mark indicates True).
v Used for Metric Tablespaces? (A check mark indicates True).
v Used for System Tablespaces and Redo? (A check mark indicates True).
b. Click Back to return to the original page.
c. Click Finish to create the component. The Topology Editor adds the new Database Configurations component to the Logical view.
6. Highlight the Database Configurations component to display its properties. Review the property values to make sure that they are valid. For the complete list of properties for this component, see the IBM Tivoli Netcool Performance Manager: Property Reference Guide.

The Database Configurations component has the following subelements:
v Channel tablespace configurations
v Database Channels
v Database Clients configurations
v Tablespace configurations
v Temporary tablespace configurations

Note: Before you actually install Tivoli Netcool Performance Manager, verify that both the /raid_2/db2data and /raid_3/db2data directory structures are created.

Adding a DataMart

The steps that are required to add a DataMart component to your topology.

About this task

Tivoli Netcool Performance Manager DataMart is normally installed on the same server on which you installed the DB2 server and the Tivoli Netcool Performance Manager database configuration. However, there is no requirement that forces DataMart to reside on the database server.

Note the following:
v If you are installing DataMart on a Linux system, you must add the IBM JRE to the PATH environment variable for the Tivoli Netcool Performance Manager UNIX user, pvuser.
v You must ensure that you are using the IBM JRE and not the RHEL JRE. The IBM JRE is supplied with the Topology Editor or with Tivoli Integrated Portal. To ensure that you are using the right JRE, you can either:

Set the JRE path to conform to that used by the Topology Editor. Do this by using the following commands (using the default location for the primary deployer):

   PATH=/opt/IBM/proviso/topologyEditor/jre/bin:$PATH
   export PATH

For a remote server, that is, one that does not host the primary deployer, you must download and install the required JRE, and set the correct JRE path. See the IBM Tivoli Netcool Performance Manager: Configuration Recommendations Guide for JRE download details. A quick way to confirm which JRE is active is sketched below.
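The following check is illustrative only, not a required step in this guide, and the "J9" marker it greps for is an assumption about the version banner printed by the IBM JRE:

   # as pvuser, after setting the PATH as shown above
   $ which java
   /opt/IBM/proviso/topologyEditor/jre/bin/java
   $ java -version 2>&1 | grep -i "J9"
   # output that mentions the IBM J9 VM indicates the IBM JRE is active;
   # no output suggests the RHEL JRE is still first on the PATH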

Procedure

To add a DataMart component, follow these steps:
1. In the Logical view, right-click the DataMarts folder and select Add DataMart from the menu. The host selection window is displayed.
2. Using the list of available hosts, select the machine on which DataMart must be installed (for example, delphi).
3. Click Finish. The Topology Editor adds the new DataMart x component (for example, DataMart 1) under the DataMarts folder in the Logical view.
4. Highlight the DataMart x component to display its properties. Review the property values to make sure that they are valid. You can specify an alternate installation user for the DataMart component by changing the values of the USER_LOGIN and USER_PASSWORD properties in the Advanced Properties tab. For the complete list of properties for this component, see the IBM Tivoli Netcool Performance Manager: Property Reference Guide.

Event notification scripts

When you install the DataMart component, two event notification scripts are installed.

The scripts are called as needed by table space size checking routines in DB2 and in Tivoli Netcool Performance Manager, if either routine detects low disk space conditions on a disk partition that is hosting a portion of the Tivoli Netcool Performance Manager database. Both scripts by default send their notifications by e-mail to a local login name. The two files and their installation locations are as follows:
v The script that is installed in $DB2_BASE/tnpm_dbadmin/db2/bin notifies the login name db2 by e-mail of impending database space problems. This script is called as needed by a DB2 routine that periodically checks for available disk space.
v The script that is installed as /opt/datamart/bin/notifydbspace notifies the login name pvuser of the same condition. This script is called as needed by the Hourly Loader component of Tivoli Netcool Performance Manager DataChannel. The loader checks for available disk space before attempting its hourly upload of data to the database.

Either file can be customized to send its warnings to a different e-mail address on the local machine, or to an SMTP server for transmission to a remote machine, or to send the notices to the local network's SNMP fault management system (that is, to an SNMP trap manager). You can modify either script to send notifications to an SNMP trap, instead of, or in addition to, its default notification.

Adding a Discovery Server

The Discovery Server is the Tivoli Netcool Performance Manager component responsible for SNMP discovery.

About this task

You can add a Discovery Server for each DataMart defined in the topology.

Procedure

To add a Discovery Server, follow these steps:

In the Logical view, right-click the DataMart x folder and select Add Discovery server from the menu. The Topology Editor displays the new Discovery Server under the DataMart x folder in the Logical view.

Adding multiple Discovery Servers

The steps required to add multiple Discovery Servers.

About this task

If you want to run multiple Discovery Servers on multiple hosts in your environment, you must perform additional steps at deployment to make sure that each host system contains identical inventory files and identical copies of the inventory hook script. IBM recommends that you use only identically configured instances of the Discovery Server.

The inventory files used by the Discovery Server are configuration files named inventory_elements.txt and inventory_subelements.txt. These files are located in the $PVMHOME/conf directory of the system where you install the DataMart component. Some technology packs provide custom sub-element inventory files, with names different from inventory_subelements.txt, that are also used by the Discovery Server.

Procedure

To add multiple Discovery Servers, do the following:
v Install the primary instance of DataMart and the Discovery Server on one target host system.
v Install and configure any required technology packs on the primary host. You modify the contents of the inventory files during this step.
v Install secondary instances of DataMart and the Discovery Server on corresponding target host systems.
v Replicate the inventory files from the system where the primary instance of DataMart is running to the $PVMHOME/conf directory on the secondary hosts. You must also replicate the InventoryHook.sh script that is located in the $PVMHOME/bin directory, and any other files that this script requires.

Adding a Tivoli Integrated Portal

The Tivoli Integrated Portal provides an integrated console for users to log on and view information that is contained on the DataView server.

Procedure

To add a Tivoli Integrated Portal, follow these steps:
1. In the Logical view, right-click the Tivoli Integrated Portals folder and select Add TIP from the menu. The Configure TIP Wizard is displayed.
2. The Topology Editor gives you the choice of adding an already existing Tivoli Integrated Portal to the topology or creating a new Tivoli Integrated Portal. To create a new Tivoli Integrated Portal, select the Create a new TIP radio button. To import an already existing Tivoli Integrated Portal into the topology, select the Import existing TIPs from host radio button.
3. Using the list of available hosts, select the host on which Tivoli Integrated Portal must be installed (for example, delphi).

Note: The hostname of the host that is selected for the Tivoli Common Reporting installation must not contain underscores. Underscores in the hostname cause the installation of Tivoli Common Reporting to fail.
4. Click Finish. The Topology Editor adds the new Tivoli Integrated Portal component to the Logical view.
5. Highlight the Tivoli Integrated Portal component to display its properties.
6. Review the other property values to make sure that they are valid. For the complete list of properties for this component, see the IBM Tivoli Netcool Performance Manager: Property Reference Guide.

Discovering existing Tivoli Integrated Portals

How to update your topology so that it sees existing Tivoli Integrated Portal (TIP) instances on your system.

About this task

This step runs an asynchronous check for existing Tivoli Integrated Portals on each selected DataView host. If a Tivoli Integrated Portal is discovered to exist on a host, the discovered Tivoli Integrated Portal detail is added to the topology.

Procedure

To discover existing Tivoli Integrated Portals, follow these steps:
1. In the Physical view, right-click the Hosts folder and select Add Host from the menu. Add the host that has an existing Tivoli Integrated Portal that you wish to discover.
2. Go to the Logical view, right-click the Tivoli Integrated Portals folder, and select Import existing TIPs from host from the menu. The Run TIP Discovery Wizard page is displayed.
3. Select the check box for each host on which you would like to perform Tivoli Integrated Portal discovery.
4. Click Import TIP. If the discovered Tivoli Integrated Portal is an old version, it is flagged within the topology for upgrade. Any DataView without a Tivoli Integrated Portal is flagged within the topology for Tivoli Integrated Portal installation on that host. The deployer takes the appropriate action when run. The discovered Tivoli Integrated Portal status is displayed as [TCR Found: <TIP Location>].
5. Click Next.
6. Configure Tivoli Integrated Portal properties.
a. Enter the appropriate host configuration values.
v TCR_INSTALLATION_DIRECTORY: The directory in which Tivoli Common Reporting is installed.
v TIP_INSTALLATION_DIRECTORY: The directory in which Tivoli Integrated Portal is installed.
v WAS_USER_NAME: The WAS user name.
v WAS_PASSWORD: The WAS password.

If you would like to configure LDAP for Tivoli Integrated Portal, see Appendix F, LDAP integration.

b. Select the check box opposite each of the Tivoli Integrated Portal hosts to which you want to apply the entered values.
c. Click Next. The hosts for which all configuration settings have been specified disappear from the set of selectable hosts.
d. Repeat steps a, b, and c until all hosts are configured.
7. Click Next to add the discovered Tivoli Integrated Portals to the topology.

Note: If you discover a Tivoli Common Reporting/Tivoli Integrated Portal of version 2.1 that was installed by using the Tivoli Common Reporting installer and not the Tivoli Netcool Performance Manager installer, the port does not align with a Technology Pack automatically. To align the port numbers, you must specify the Tivoli Integrated Portal port when performing the Technology Pack installation.

Adding a DataView

How to add a DataView.

About this task

Note: To display DataView real-time charts, you must have the Java runtime environment (JRE) installed on the browser where the charts are to be displayed. You can download the JRE from the Sun download page.

Note: If you are reusing an existing Tivoli Integrated Portal that was installed by a user other than root, the default deployment of DataView encounters problems. To avoid these problems, you must remove the offending Tivoli Integrated Portal from your topology and add both the Tivoli Integrated Portal and DataView as a separate post-deployment step. The steps that you must follow to install DataView reusing an existing Tivoli Integrated Portal are outlined in the sections:
v Installing DataView with a non-root user on a local host and reusing Tivoli Integrated Portal on page 80
v Installing DataView with a non-root user on a remote host and reusing Tivoli Integrated Portal on page 82

To add a DataView component:

Procedure

In the Logical view, right-click a Tivoli Integrated Portal and select Add DataView from the menu. The DataView is automatically added, inheriting its properties from the Tivoli Integrated Portal instance.

Add the DataChannel administrative components

The steps required to add DataChannel administrative components.

Procedure
1. In the Logical view, right-click the DataChannels folder and select Add Administrative Components from the menu. The host selection window opens.
2. Using the drop-down list of available hosts, select the machine that you want to be the Channel Manager host for your DataChannel configuration (for example, corinth).
3. Click Finish.

The Topology Editor adds a set of new components to the Logical view:

   Channel Manager - Enables you to start and stop individual DataChannels and monitor the state of various DataChannel programs. There is one Channel Manager for the entire DataChannel configuration. The Channel Manager components are installed on the first host you specify.
   Corba Naming Server - Provides near real-time data to DataView.
   High Availability Managers - Mainly used for large installations that want to use redundant SNMP collection paths. The HAM constantly monitors the availability of one or more SNMP collection hosts, and switches collection to a backup host (called a spare) if a primary host becomes unavailable.
   Log Server - Used to store user, debug, and error information.
   Plan Builder - Creates the metric data routing and processing plan for the other components in the DataChannel.
   Custom DataChannel properties - The custom property values that apply to all DataChannel components.
   Global DataChannel properties - The global property values that apply to all DataChannel components.

Adding a DataChannel

A DataChannel is a software module that receives and processes network statistical information from both SNMP and non-SNMP (BULK) sources.

About this task

This statistical information is then loaded into a database, where it can be queried by SQL applications and captured as raw data or displayed on a portal in a variety of reports. Typically, collectors are associated with technology packs, a suite of Tivoli Netcool Performance Manager programs specific to a particular network device or technology. A technology pack tells the collector what kind of data to collect on target devices and how to process that data. See the Pack Installation and Configuration Guide for detailed information about technology packs.

Procedure

To add a DataChannel, follow these steps:
1. In the Logical view, right-click the DataChannels folder and select Add DataChannel from the menu. The Configure the DataChannel window is displayed.
2. Using the list of available hosts, select the machine that will host the DataChannel (for example, corinth).
3. Accept the default channel number (for example, 1).
4. Click Finish.

The Topology Editor adds the new DataChannel (for example, DataChannel 1) to the Logical view.
5. Highlight the DataChannel to display its properties. Note that the DataChannel always installs and runs as the default user for the host (the Tivoli Netcool Performance Manager UNIX username, pvuser). Review the other property values to make sure they are valid. For the complete list of properties for this component, see the IBM Tivoli Netcool Performance Manager: Property Reference Guide.

The DataChannel has the following subelements:
v Daily Loader x - Processes 24 hours of raw data every day, merges it together, then loads it into the database. The loader process provides statistics on metric channel tables and metric tablespaces.
v Hourly Loader x - Reads files output by the Complex Metric Engine (CME) and loads the data into the database every hour. The loader process provides statistics on metric channel tables and metric tablespaces.

The Topology Editor includes the channel number in the element names. For example, DataChannel 1 would have Daily Loader 1 and File Transfer Engine 1.

Note: When you add DataChannel x, the Problems view shows that the Input_Components property for the Hourly Loader is blank. This missing value is automatically filled in when you add a DataLoad collector (as described in the next section), and the error is resolved.

Separating the data and executable directories

You may wish to separate the data and executable directories for your DataChannel.

About this task

Note: Separating the data and executable directories is only possible during the first install activity. After the installation, you cannot modify the topology to separate the data and the executable directories.

If you wish to separate the data and executable directories for your DataChannel, follow these steps:

Procedure
1. Create two directories on the DataChannel host, for example, DATA_DIR to hold the data and EXE_DIR to hold the executables.
2. Change the LOCAL_ROOT_DIRECTORY value on that host's Disk Usage Server to the data root folder, DATA_DIR. In the Host advanced properties, you will see the DATA_DIR value propagated to all DC folder values for the host.
3. Change DC_ROOT_EXE_DIRECTORY to the executable directory, EXE_DIR. This change propagates to the DC conf directory, the DataChannel Bin Directory, and the DataChannel executable file name.

Note: For advanced information about DataChannels, see Appendix B, DataChannel architecture, on page 143.

Adding a DataChannel Remote (DCR)

A DataChannel is a software module that receives and processes network statistical information from both SNMP and non-SNMP (BULK) sources. A DataChannel Remote is a DataChannel installation configuration in which the subchannel, CME,

and FTE components are installed and run on one host, while the Loader components are installed and run on another host.

About this task

In a DataChannel Remote configuration, the subchannel hosts can continue processing data and detecting threshold violations, even while disconnected from the Channel Manager server.

The following task assumes that you are placing the LDR and DLDR on the current host, called, for example, hostname1, and that you are placing the subchannel, CME, and FTE on another host, called, for example, hostname2.

Procedure

To add a remote DataChannel, follow these steps:
1. If it is not already open, open the Topology Editor (see Starting the Topology Editor on page 59).
2. In the Topology Editor, select Topology > Open existing topology. The Open Topology window is displayed.
3. For the topology source, select From database and click Next.
4. In the Physical view, add a host, hostname2, to the downloaded topology.
5. In the Logical view, right-click the DataChannels folder and select Add DataChannel from the menu. The Configure the DataChannel window is displayed.
6. Using the list of available hosts, select the machine that hosts the DataChannel, hostname1.
7. Accept the default channel number (for example, 2).
8. Click Finish. The Topology Editor adds the new DataChannel (for example, DataChannel 2) to the Logical view.
9. Right-click the new DataChannel, DataChannel 2, and select Add SNMP Collector.
10. Select server hostname2 as the host. The Collector 2.2 is added.
11. Right-click the Complex Metric Engine 2.2 and choose Change Host.
12. Select server hostname2 as the host. The File Transfer Engine 2.2 is added to hostname2.

Results

The setup must look as follows:
v DataChannel 2 - hostname1
v Collector SNMP 2.2 - hostname2
v Complex Metric Engine 2.2 - hostname2
v File Transfer Engine 2.2 - hostname2
v Daily Loader 2 - hostname1
v Hourly Loader 2 - hostname1

This configuration places the FTE and CME on one server and the LDR and DLDR on another server.

Adding a collector

Collectors collect and process raw statistical data about network devices, obtained from various network resources. The collectors send the received data through a DataChannel for loading into the Tivoli Netcool Performance Manager database.

Note: Collectors do not need to be on the same machine as the DB2 server and DataMart.

Collector types

Collector types and their descriptions, plus the steps required to associate a collector with a technology pack.

About this task

There are two types of collectors:

   SNMP collector - Collects data by using SNMP polling directly to network services. Specify this collector type if you plan to install a Tivoli Netcool Performance Manager SNMP technology pack. These technology packs operate in networking environments where the associated devices on which they operate use an SNMP protocol.
   Bulk DataLoad collector - Imports data from files. The files can have multiple origins, including log files generated by network devices, files generated by SNMP collectors on remote networks, or files generated by a non-Tivoli Netcool Performance Manager network management database.

There are two types of bulk collectors:

   UBA - A Universal Bulk Adapter (UBA) collector that handles bulk input files generated by non-SNMP devices. Specify this collector type if you plan to install a Tivoli Netcool Performance Manager UBA technology pack, including Alcatel 5620 NM, Alcatel 5620 SAM, and Cisco CWM.
   BCOL - A bulk collector that retrieves and interprets the flat file output of network devices or network management systems. This collector type is not recommended for Tivoli Netcool Performance Manager UBA technology packs, and is used in custom technology packs.

If you are creating a UBA collector, you must associate it with a specific technology pack. For this reason, IBM recommends that you install the relevant technology pack before creating the UBA collector. Therefore, you would perform the following sequence of steps:

Procedure
1. Install Tivoli Netcool Performance Manager, without creating the UBA collector.
2. Download and install the technology pack.
3. Open the deployed topology file to load the technology pack and add the UBA collector for it.

Note: For detailed information about UBA technology packs and the installation process, see the Technology Pack Installation and Configuration Guide. Configure the installed pack by following the instructions in the pack-specific user's guide.

Restrictions

There are a number of collector restrictions that must be noted:
v The maximum collector identification number is 999.
v There is no relationship between the channel number and the collector number (that is, there is no predefined range for collector numbers based on channel number). Therefore, collector 555 could be attached to DataChannel 7.
v Each database channel can have a maximum of 40 subchannels (and therefore, 40 collectors).

Creating an SNMP collector

How to create an SNMP collector.

Procedure

To add an SNMP collector, follow these steps:
1. In the Logical view, right-click the DataChannel x folder. The pop-up menu lists the following options:

   Add Collector SNMP - Creates an SNMP collector.
   Add Collector UBA - Creates a UBA collector.
   Add Collector BCOL - Creates a BCOL collector. This collector type is used in custom technology packs. DataMart must be added to the topology before a BCOL collector can be added.

Select Add Collector SNMP. The Configure Collector window opens.
2. Using the drop-down list of available hosts on the Configure Collector window, select the machine that will host the collector (for example, corinth).
3. Accept the default collector number (for example, 1).
4. Click Finish. The Topology Editor displays the new collector under the DataChannel x folder in the Logical view.
5. Highlight the collector to view its properties. The Topology Editor displays both the SNMP collector core parameters and the SNMP technology pack-specific parameters. The core parameters are configured with all SNMP technology packs. You can specify an alternate installation user for the SNMP collector by changing the values of the pv_user, pv_user_group, and pv_user_password properties in the Advanced Properties tab. Review the values for the parameters to make sure they are valid.

Note: For information about the core parameters, see the IBM Tivoli Netcool Performance Manager: Property Reference Guide.

Results

The collector has two components:

   Complex Metric Engine x - Performs calculations on the collected data.
   File Transfer Engine (FTE) x - Transfers files from the collector's output directories and places them in the input directory of the CME. The FTE writes data to the file /var/adm/wtmpx on each system that hosts a collector. As part of routine maintenance, check the size of this file to prevent it from growing too large.

Note: The Topology Editor includes the channel and collector numbers in the element names. For example, DataChannel 1 could have Collector SNMP 1.1, with Complex Metric Engine 1.1 and File Transfer Engine 1.1.

Adding a Cross Collector CME

The steps required to add a Cross Collector CME.

Procedure
1. In the Logical view, right-click the Cross Collector CME folder and select Add Cross Collector CME from the menu. The Specify the Cross Collector CME details window is displayed.
2. Using the drop-down list of available hosts, select the machine that will host the Cross-Collector CME (for example, corinth).
3. Select the desired Disk Usage Server on the selected host.
4. Select the desired channel number (for example, 2000).
5. Click Finish. The Topology Editor adds the new Cross-Collector CME (for example, Cross-Collector CME 2000) to the Logical view.
6. Highlight the Cross-Collector CME to display its properties.

Note: The Cross-Collector CME always installs and runs as the default user for the host (the Tivoli Netcool Performance Manager UNIX username, pvuser).

7. Review the other property values to make sure they are valid. For the complete list of properties for this component, see the IBM Tivoli Netcool Performance Manager: Property Reference Guide.
8. After running the deployer to install the Cross-Collector CME, you must restart the CMGR process.

Note: You will notice that dccmd start all does not start the Cross-Collector CME at this point.

9. You must first deploy a formula against the Cross-Collector CME by using the DataChannel frmi tool. Run the frmi tool. The following is an example command:

   frmi ecma_formula.js -labels formula_labels.txt

Where:
v The format of formula_labels.txt is two columns separated by an "=" sign.
v The first column is the full path to the formula.
v The second column is the number of the Cross-Collector CME.

v The formula_labels.txt file is of the format:

   Path_to_ECMA_formulas~Formula1Name=2000
   Path_to_ECMA_formulas~Formula2Name=2001

Note: When a Cross-Collector CME (CC-CME) is installed on the system and formulas are applied against it, the removal of collectors that the CC-CME depends on is not supported. This is an exceptional case; that is, if you have not installed a CC-CME, collectors can be removed.

Adding multiple Cross Collectors

About this task

To add multiple Cross Collectors:

Procedure
1. In the Logical view, right-click the Cross Collector CME folder and select Add multiple Cross Collectors from the menu. The Add Cross Collector CME window is displayed.
2. Optional: Click Add Hosts to add to the set of Cross Collector hosts. Only hosts that have a DUS can be added.

Note: It is recommended that you have 20 Cross Collector CMEs spread across the set of topology hosts.

3. Set the number of Cross Collector CMEs for the set of hosts. There are two ways you can do this:
v Click Calculate Defaults to use the wizard to calculate the recommended spread across the added hosts. This sets the number of Cross Collector CMEs to the default value.
v To manually set the number of Cross Collector CMEs for each host, use the drop-down menu opposite each host name.
4. Click Finish.

Saving the topology

When you are satisfied with the infrastructure, verify that all the property values are correct and that any problems have been resolved, then save the topology to an XML file.

Procedure

To save the topology as an XML file, follow these steps:
1. In the Topology Editor, select Topology, then either Save Topology As or Save Topology. Click Browse to navigate to the directory in which to save the file. By default, the topology is saved as the topology.xml file in the topologyEditor directory.
2. Accept the default value or choose another name or location, then click OK to close the file browser window.
3. The file name and path are displayed in the original window. Click Finish to save the file and close the window. You are now ready to deploy the topology file (see Starting the deployer on page 75).

Note: Until you actually deploy the topology file, you can continue making changes to it as needed by following the directions in Opening an existing topology file. See Chapter 6, Modifying the current deployment, on page 93 for more information about making changes to a deployed topology file.

Note: Only when you begin the process of deploying a topology is it saved to the database. For more information, see Deploying the topology on page 77.

Opening an existing topology file

As you create the topology, you can save the file and update it as needed.

About this task

To open a topology file that exists but that has not yet been deployed:

Procedure
1. If it is not already open, open the Topology Editor. For more information, see Starting the Topology Editor on page 59.
2. In the Topology Editor, select Topology > Open existing topology. The Open Topology window is displayed.
3. For the topology source, click local, then use Browse to navigate to the correct directory and file. Once you have selected the file, click OK. The selected file is displayed in the Open Topology window.
4. Click Finish. The topology is displayed in the Topology Editor.
5. Change the topology as needed.

Starting the deployer

The primary deployer is installed on the same machine as the Topology Editor. You first run the topology file on the primary deployer, and then run secondary installers on the other machines in the distributed environment. See Resuming a partially successful first-time installation on page 84 for more information about the difference between primary and secondary deployers.

Note: Before you start the deployer, verify that all the database tests have been performed. Otherwise, the installation might fail. See Chapter 3, Installing and configuring the prerequisite software, on page 29 for more information.

Primary deployer

The steps required to run the primary deployer from the Topology Editor.

Procedure

Click Run > Run Deployer for Installation.

Note: When you use the Run menu options (install or uninstall), the deployer uses the last saved topology file, not the current one. Be sure to save the topology file before using a Run command.

Secondary deployer

A secondary deployer is only required if remote installation by using the primary deployer is not possible.

About this task

For more information about why you might need to use a secondary deployer, see Appendix A, Remote installation issues, on page 139. To run a secondary deployer:

Procedure
v To run a secondary deployer from the launchpad:
1. On the launchpad, click Start the Deployer.
2. On the Start Deployer page, click the Start Deployer link.
v To run a secondary deployer from the command line:
1. Log in as root.
2. Change to the directory containing the deployer within the downloaded Tivoli Netcool Performance Manager distribution. On Linux systems:

   # cd <DIST_DIR>/proviso/RHEL/Install/deployer/

   <DIST_DIR> is the directory on the hard drive where you copied the contents of the Tivoli Netcool Performance Manager distribution in Downloading the Tivoli Netcool Performance Manager distribution to disk on page 39.
3. Enter the following command:

   # ./deployer.bin

Note: See Appendix D, Deployer CLI options, on page 163 for the list of supported command-line options.

Pre-deployment check

The deployer performs a check on the operating system versions and verifies that the minimum required packages are installed. The deployer fails if the required packages listed in the check_os.ini file are not installed.

About this task

The deployer checks for the files listed in the relevant check_os.ini file. The check_os.ini file detailing Linux requirements can be found at:

   /RHEL/Install/deployer/proviso/bin/Check/check_os.ini

Procedure
v To check whether the required packages are installed, follow these steps:
1. Click Run > Run Deployer for Installation to start the deployer.
2. Select the Check prerequisites check box.
3. Click Next. The check returns a failure if any of the required files are missing.

v To repair a failure, follow these steps:
1. Log in as root.
2. Install the packages listed as missing.
3. (Linux only) If any openmotif package is listed as missing, install the missing openmotif package and update the package database by using the command:

   # updatedb

4. Rerun the check prerequisites step.

Deploying the topology

How to deploy your defined topology.

About this task

The deployer displays a series of pages to guide you through the Tivoli Netcool Performance Manager installation. The installation steps are displayed in a table, allowing you to run each step individually or to run all the steps at once. For more information about the deployer interface, see Primary deployer on page 75.

Important: By default, Tivoli Netcool Performance Manager uses Monday to determine when a new week begins. If you want to specify a different day, you must change the FIRST_WEEK_DAY parameter in the Database Registry by using the dbregedit utility. This parameter can be changed when you first deploy the topology that installs your Tivoli Netcool Performance Manager environment, and it must be changed before the Database Channel is installed. For more information, see the Tivoli Netcool Performance Manager Registry and Space Management technote.

If you must stop the installation, you can resume it later. For more information, see Resuming a partially successful first-time installation on page 84.

Procedure

To deploy the Tivoli Netcool Performance Manager topology, follow these steps:
1. The deployer opens, displaying a welcome page. Click Next to continue.
2. If you started the deployer from the launchpad or from the command line, enter the full path to your topology file, or click Choose to go to the correct location. Click Next to continue.

Note: If you start the deployer from within the Topology Editor, this step is skipped.

The database access window prompts for the security credentials.
3. Enter the host name (for example, delphi) and database administrator password (for example, PV), and verify the other values (port number, DB Name, and user name). If the database does not yet exist, these parameters must match the values that you specified when you created the database configuration component (see Adding a database configurations component on page 62). Click Next to continue.
4. The node selection window shows the target systems and how the files are transferred (see Secondary deployer on page 76 for an explanation of this window). The table has one row for each machine where at least one Tivoli Netcool Performance Manager component is installed.

The default settings are as follows:
v The Enable check box is selected. If this option is not selected, no actions are done on that machine.
v The Check prerequisites check box is not selected. If selected, scripts are run to verify that the prerequisite software has been installed.
v Remote execution is enabled, by using both RSH and SSH. If remote execution cannot be enabled, due to a particular customer's security protocols, see Appendix A, Remote installation issues, on page 139 and Resuming a partially successful first-time installation on page 84.
v File transfer by using FTP is enabled.
If wanted, reset the values as appropriate for your deployment. Click Next to continue.
5. Provide media location details. The Tivoli Netcool Performance Manager Media Location for components window is displayed, listing each component and component platform.
a. Click the Choose the Proviso Media button. You are asked to provide the location of the media for each component.
b. Enter the base directory in which your media is located. If any of the component media is not within the directory that is specified, you are asked to provide media location detail for that component.
6. The deployer displays summary information about the installation. Review the information, then click Next. The deployer displays the table of installation steps (see Pre-deployment check on page 76 for an overview of the steps table). Note the following:
v Regardless of whether the steps are run, or whether they pass or fail, closing the wizard results in the topology being posted to the Tivoli Netcool Performance Manager database, assuming it exists.
v If an installation step fails, see Resuming a partially successful first-time installation on page 84 for debugging information, and continue the installation by following the instructions in that section.
v If the Tivoli Common Reporting installation step fails, which can happen when there is not enough space available in /usr and /tmp or directory cleanup has not been carried out, run the tcrclean.sh script. To run this script:
a. Copy the tcrclean.sh script from the primary deployer (the host where the Topology Editor is installed) to the server where the Tivoli Common Reporting installation step fails. The tcrclean.sh script can be found on the primary deployer in the directory /opt/IBM/proviso/deployer/proviso/bin/util/.
b. Run tcrclean.sh.
c. When prompted, enter the installation location of Tivoli Common Reporting.
d. Continue the installation by following the instructions in Resuming a partially successful first-time installation on page 84.
7. Click Run All to run all the steps in sequence.
8. The deployer prompts you for the location of the setup files. Use the file selection window to go to the top-level directory for your operating system to avoid further prompts.

For example: <DIST_DIR>/RHEL/

<DIST_DIR> is the directory on the hard disk where you copied the contents of the Tivoli Netcool Performance Manager distribution in Downloading the Tivoli Netcool Performance Manager distribution to disk on page 39.

Note: This assumes that the Tivoli Netcool Performance Manager distribution was downloaded to the folder /var/tmp/cdproviso, as per the instructions in Downloading the Tivoli Netcool Performance Manager distribution to disk on page 39.

If Tivoli Integrated Portal is configured to install on a remote host, the Run remote Tivoli Integrated Portal installation step is included. This step prompts the user to enter the root password. The deployer requires this information to run as root on the remote host and perform the Tivoli Integrated Portal installation.
9. When all the steps are completed successfully, click Done to close the wizard.
10. To stop and start Tivoli Common Reporting, follow these steps:
a. Go to the <tip_install_dir>/products/tcr/bin/ directory.
b. Set the DATABASE_HOME environment variable. For example:

   DATABASE_HOME=/opt/db2/product/10.1.0
   export DATABASE_HOME

c. Run the following commands:

   LD_LIBRARY_PATH=$DATABASE_HOME/lib32:$LD_LIBRARY_PATH
   export LD_LIBRARY_PATH

d. Run the following scripts:
v stopTCRserver.sh <username> <password>
v startTCRserver.sh

Note: These scripts must be run every time Tivoli Integrated Portal is restarted.

Note: The Topology Editor must be closed after every deployment.

Install a libcrypto.so

For full SNMPv3 support, SNMP DataLoad must have access to the libcrypto.so library.

About this task

Note: As libcrypto.so is delivered as standard on Linux platforms, steps 1 and 2 are not required if you are running on Linux.

For each new and existing SNMP DataLoad, you must perform the following steps.

Procedure
1. Install the OpenSSL package. This package can be downloaded from the OpenSSL website.
2. As root, extract and install the libcrypto.so file by using the following commands:

   # cd /usr/lib
   # ar -xv ./libcrypto.a
   # ln -s libcrypto.so.<version> libcrypto.so

   Where <version> is the version of the extracted library.
3. Update the dataload.env file so that the LD_LIBRARY_PATH (Linux) or LIBPATH environment variable includes the path /usr/lib.
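For illustration only, the dataload.env update in step 3 might look like the following on a Linux system; the variable is the one named in step 3, but treat the exact placement within your dataload.env as an assumption:

   # appended to dataload.env so that the collector can load libcrypto.so
   LD_LIBRARY_PATH=/usr/lib:${LD_LIBRARY_PATH}
   export LD_LIBRARY_PATH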

92 What to do next
Check that the variable has been set by doing the following:
1. Open a fresh shell.
2. Check the dataload.env file.
3. Bounce the SNMP DataLoad collector.
Upon startup, with a valid library, the collector logs the following messages:
INFO:CRYPTOLIB_LOADED Library libcrypto.so (OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008, 0x90802f) has been loaded.
INFO:SNMPV3_SUPPORT_OK Full SNMPv3 support Auth(None,MD5,SHA-1) x Priv(None,DES,AES) is available.
Installing DataView with a non-root user on a local host and reusing Tivoli Integrated Portal
If you are reusing an existing Tivoli Integrated Portal that was installed by a non-root user, the default deployment of DataView might encounter problems.
About this task
This procedure describes how to install DataView on a local host if you decide to reuse an existing Tivoli Integrated Portal that was installed by a non-root user.
Procedure
1. Install the Topology Editor on the local host.
2. Run the steps that are required to discover existing Tivoli Integrated Portals, as described in Discovering existing Tivoli Integrated Portals.
3. Configure Tivoli Integrated Portal and DataView in the Topology Editor on the local host. The values of the parameters for Tivoli Integrated Portal in the Topology Editor must be the same as those of the Tivoli Integrated Portal previously installed by OMNIbus. For example, check that the values of USER_INSTALL_DIR and IAGLOBAL_WC_adminhost in the Topology Editor correspond to the Tivoli Integrated Portal installed by OMNIbus.
4. Run the deployer for installation from the Topology Editor.
5. Go through the screens as usual to the last run steps screen.
6. Mark the Install DataView and Register DataView steps as Held.
Note: If Tivoli Integrated Portal Install steps are present, mark those steps as Success.
7. Mark all other steps as Ready.
8. Run the deployer so that all steps except Install DataView and Register DataView have run and have a status of Success.
9. Change to the installer directory in the Tivoli Netcool Performance Manager media. For example, ./proviso/RHEL/Install.
10. Change to the DataView directory containing the sample DataView file, dvinstall.cfg. For example, ./deployer/proviso/data/deploymentpackage/DeploymentSteps/DataView/templateDV.
11. Manually configure the dvinstall.cfg for your environment. Right-click the Install DataView step in the deployer and open the properties tab so that the values for the DataView installation are displayed. Use these values to populate the dvinstall.cfg. See the sample dvinstall.cfg provided below. 80 IBM Tivoli Netcool Performance Manager: Installation Guide

93 12. Change to the DataView directory that contains the install.sh in the Tivoli Netcool Performance Manager media. For example, on a Linux environment this is: ../proviso/RHEL/DataView/RHEL5
13. In a command terminal, as the same non-root user that installed OMNIbus, set the PATH variable by using the command:
export PATH=/opt/IBM/tivoli/tip/java/bin:${PATH}
Note: Check that the Java path is correct. The directory /opt/IBM/tivoli/tip/java/bin must exist.
14. In the same terminal that you set the PATH variable in, run the command to silently install DataView as the non-default user, pointing to the dvinstall.cfg file. For example, use the following command: ./install.sh -i silent -f dvinstall.cfg
15. After the previous step has completed, open the Tivoli Integrated Portal URL in a browser and check that DataView exists. For example, Defined Resource Views must be visible under Performance > Network Resources.
16. Mark the Install DataView step as Success.
17. Mark the Register DataView step as Ready.
18. Run the Register DataView step.
19. Click Done in the deployer.
Example
Sample dvinstall.cfg:
#
# Licensed Materials - Property of IBM
# 5724-P55, 5724-P57, 5724-P58, 5724-P59
# Copyright IBM Corporation 2006, 2013. All Rights Reserved.
# US Government Users Restricted Rights- Use, duplication or disclosure
# restricted by GSA ADP Schedule Contract with IBM Corp.
#
# Location of the TIP installation
USER_INSTALL_DIR=/opt/IBM/tivoli/tip
# Unix user name for the TIP administrative user
TIP_ADMINISTRATOR=tipadmin
# Password for the TIP administrative user
TIP_ADMINISTRATOR_PASSWORD=tipadmin
# location of the DB2 driver
DB2_CLIENT_DIR=/opt/db2/product/10.1.0
# connection url to the database
TNPM_DATABASE_URL=jdbc:db2://VOIPDEV3:50000/PV
# name of a valid database user (metric account: PV_LOIS)
TNPM_DATABASE_USER=PV_LOIS
# password of the database user
TNPM_DATABASE_USER_PASSWORD=PV
# name of the context root of the web app
DATAVIEW_CONTEXT=PV
# If true then TIP is restarted after DataView is installed. Default is TRUE if RESTART_TIP is not set.
RESTART_TIP=yes Chapter 4. Installing in a distributed environment 81
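As a usage sketch of steps 13 and 14, the following shell session ties the PATH setup, the media directory, and the silent install together; the dvinstall.cfg location shown is an example, so substitute your own path:

# Run as the same non-root user that installed OMNIbus
export PATH=/opt/IBM/tivoli/tip/java/bin:${PATH}
cd <DIST_DIR>/proviso/RHEL/DataView/RHEL5
./install.sh -i silent -f /var/tmp/dvinstall.cfg
echo "install.sh exit status: $?"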

94 Installing DataView with a non-root user on a remote host and reusing Tivoli Integrated Portal
If you are reusing an existing Tivoli Integrated Portal that was installed by a non-root user, the default deployment of DataView might encounter problems.
About this task
This procedure describes how to install DataView on a remote host should you decide to reuse an existing Tivoli Integrated Portal that was installed by a non-root user.
Procedure
1. Install the Topology Editor on the local host.
2. Run the steps that are required to discover existing Tivoli Integrated Portals, as described in Discovering existing Tivoli Integrated Portals.
3. Add DataView to the discovered Tivoli Integrated Portal on the remote host by using the Topology Editor.
4. Run the deployer for installation from the Topology Editor.
5. Go through the screens as usual to the last run steps screen.
6. Mark the Run Remote DataView Install and Register Remote DataView steps as Held.
Note: If Tivoli Integrated Portal Install steps are present, mark those steps as Success.
7. Mark all other steps as Ready, including the Prepare Remote DataView Install step.
8. Run the deployer so that all steps except Run Remote DataView Install and Register Remote DataView have run and have a status of Success. The Prepare Remote DataView Install step places the DataView files and configuration files in the /tmp directory of the remote host.
9. Change to the runtime folder in the DataView_step folder on the remote host in the /tmp directory:
cd /tmp/ProvisoConsumer/plan/MachinePlan_machname/0000X_DataView_step/runtime
For example, /tmp/ProvisoConsumer/plan/MachinePlan_VOIPDEV4/00003_DataView_step/runtime
10. As the root user, change the permissions of all files and folders in the 0000X_DataView_step folder. Use the following command: chmod -R 777 *
11. As the non-root user that was used to install OMNIbus, run the run.sh file. Use the command: ./run.sh
12. After the ./run.sh step is completed, open the Tivoli Integrated Portal URL in a browser and check that DataView exists. For example, Defined Resource Views must be visible under Performance > Network Resources.
13. Mark the Register Remote DataView step as Ready.
14. Run the Register Remote DataView step.
15. Click Done in the deployer. 82 IBM Tivoli Netcool Performance Manager: Installation Guide
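The following is a minimal sketch of steps 9 through 11 on the remote host, assuming the example path shown in step 9 and a non-root user named tipuser (a placeholder; use the user that installed OMNIbus):

# As root: relax permissions on the staged DataView step directory
cd /tmp/ProvisoConsumer/plan/MachinePlan_VOIPDEV4/00003_DataView_step/runtime
chmod -R 777 *
# As the non-root user (tipuser is a placeholder), run the prepared installer
su - tipuser -c "cd /tmp/ProvisoConsumer/plan/MachinePlan_VOIPDEV4/00003_DataView_step/runtime && ./run.sh"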

95 Example
Sample dvinstall.cfg:
#
# Licensed Materials - Property of IBM
# 5724-P55, 5724-P57, 5724-P58, 5724-P59
# Copyright IBM Corporation 2006, 2013. All Rights Reserved.
# US Government Users Restricted Rights- Use, duplication or disclosure
# restricted by GSA ADP Schedule Contract with IBM Corp.
#
# Location of the TIP installation
USER_INSTALL_DIR=/opt/IBM/tivoli/tip
# Unix user name for the TIP administrative user
TIP_ADMINISTRATOR=tipadmin
# Password for the TIP administrative user
TIP_ADMINISTRATOR_PASSWORD=tipadmin
# location of the DB2 driver
DB2_CLIENT_DIR=/opt/db2/product/10.1.0
# connection url to the database
TNPM_DATABASE_URL=jdbc:db2://VOIPDEV3:50000/PV
# name of a valid database user (metric account: PV_LOIS)
TNPM_DATABASE_USER=PV_LOIS
# password of the database user
TNPM_DATABASE_USER_PASSWORD=PV
# name of the context root of the web app
DATAVIEW_CONTEXT=PV
# If true then TIP is restarted after DataView is installed. Default is TRUE if RESTART_TIP is not set.
RESTART_TIP=yes
Next steps
The steps to perform after deployment.
The next step is to install the technology packs, as described in the Technology Pack Installation and Configuration Guide.
After you have created the topology and installed Tivoli Netcool Performance Manager, it is easy to change the environment. Open the deployed topology file (loading it from the database), make your changes, and run the deployer with the updated topology file as input. For more information about performing incremental installations, see Chapter 6, Modifying the current deployment, on page 93.
Note: After your initial deployment, always load the topology file from the database to make any additional changes (such as adding or removing a component), because it reflects the status of your environment. After you have made your changes, you must deploy the updated topology so that it is propagated to the database. To make any subsequent changes after the deployment, you must load the topology file from the database again. Chapter 4. Installing in a distributed environment 83

96 To improve performance, IBM recommends that you regularly compute the statistics on metadata tables. You can compute these statistics by creating a cron entry that runs the dbmgr (Database Manager Utility) analyzeMetadataTables command at intervals. The following example shows a cron entry that checks statistics every hour at 30 minutes past the hour. The ForceCollection option is set to N, so that statistics are calculated only when the internal calendar determines that it is necessary, and not every hour:
30 * * * * [ -f /opt/DM/dataMart.env ] && [ -x /opt/DM/bin/dbmgr ] && . /opt/DM/dataMart.env && dbmgr analyzeMetadataTables A N
For more information about dbmgr and the analyzeMetadataTables command, see the IBM Tivoli Netcool Performance Manager: Database Administration Guide.
For each new SNMP DataLoad, change the env file of the Tivoli Netcool Performance Manager user to add the directory with the OpenSSL libcrypto.so to the LD_LIBRARY_PATH (or LIBPATH).
Resuming a partially successful first-time installation
If you quit an installation that is in progress for any reason, this section describes how you can resume the installation process.
About this task
In this scenario, you try to deploy a Tivoli Netcool Performance Manager topology for the first time. You define the topology and start the installation. Although some of the components of the Tivoli Netcool Performance Manager topology are installed successfully, the overall installation does not complete successfully.
In addition, it is possible to skip a section of the installation. For example, a remote node might not be accessible for some reason. After you skip this portion of the installation, resume the installation to continue with the remaining steps. The deployer lists only those steps that are needed to complete the installation on the missing node.
For example, during the first installation, DB2 was not running, so the database check failed. Stop the installation, start DB2, and then resume the installation.
Procedure
To resume a partial installation, follow these steps:
1. After you correct the problem, restart the deployer from the command line by using the following command:
./deployer.bin -Daction=resume
Use the resume switch to enable you to resume the installation exactly where you left off.
Note: If you are asked to select a topology file to resume your installation, select the topology file that you saved before you began the installation.
2. The deployer opens, displaying a welcome page. Click Next to continue. 84 IBM Tivoli Netcool Performance Manager: Installation Guide

97 3. Accept the default location of the base installation directory of the DB2 JDBC driver (/opt/db2/product/10.1.0). Or, click Choose to go to another directory. 4. Click Next to continue. 5. The steps page shows the installation steps in the same state they were in when you stopped the installation (with the completed steps marked Success, the failed step marked Error, and the remaining steps marked Held). 6. Select the step that previously failed, reset it to Ready, and then click Run Next. Verify that this installation step now completes successfully. 7. Run the remaining installation steps, verifying that they complete successfully. At the end of the installation, the deployer loads the updated topology information into the database. Chapter 4. Installing in a distributed environment 85
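For example, if the failure was the database check in the scenario above, a quick pre-check before resuming might look like the following sketch (the DB2 instance owner name db2inst1 is an assumption; use your own instance owner):

# Confirm that the DB2 instance is running before you resume (db2inst1 is a placeholder)
su - db2inst1 -c "db2start"    # reports SQL1026N if the database manager is already active
# Then resume the deployer from where it stopped
./deployer.bin -Daction=resume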

98 86 IBM Tivoli Netcool Performance Manager: Installation Guide

99 Chapter 5. Installing as a minimal deployment
Describes how to install Tivoli Netcool Performance Manager as a minimal deployment.
Overview
A minimal deployment installation is used primarily for demonstration or evaluation purposes, and installs the product on the smallest number of machines possible, with minimal user input.
Before you begin
This installation type installs all the Tivoli Netcool Performance Manager components on the local host using a predefined topology file to define the infrastructure. The minimal deployment installation does not install the MIB-II Technology Pack. You must manually install the technology pack after you complete this installation. When you perform a minimal deployment installation, the Tivoli Netcool Performance Manager components are installed on the server you are running the deployer from. Before installing Tivoli Netcool Performance Manager, you must have installed the prerequisite software. For detailed information, see Chapter 3, Installing and configuring the prerequisite software, on page 29.
Note: Before you start the installation, verify that all the database tests have been performed; otherwise, the installation might fail. See Chapter 3, Installing and configuring the prerequisite software, on page 29.
Minimal Installation Process: If you are setting up a demonstration or evaluation system, it is possible to install all Tivoli Netcool Performance Manager components on a single server for Linux systems. In this case, your installation process is as follows: Copyright IBM Corp. 2006, 2013 87

100 Special consideration
By default, Tivoli Netcool Performance Manager uses Monday to determine when a new week begins. If you wish to specify a different day, you must change the FIRST_WEEK_DAY parameter in the Database Registry by using the dbregedit utility. This parameter can be changed only when you first deploy the topology that installs your Tivoli Netcool Performance Manager environment, and it must be changed before the DataChannel is installed. For more information, see the IBM Tivoli Netcool Performance Manager: Database Administration Guide.
Resume of partial install is unavailable
There is no resume functionality available for a minimal deployment installation. As a result, a minimal deployment installation must be carried out in full if it is to be attempted.
Overriding default values
When you perform a minimal deployment installation, you must accept all default values. The exceptions are:
v The location of the DB2 JDBC driver. The default is /opt/db2/product/10.1.0/java
v The Tivoli Netcool Performance Manager installation destination folder. The default is /opt/proviso
v DB2 server parameters. The defaults are:
DB2 Base: /opt/db2
Database Home: /opt/db2/product/10.1.0
DB2 Port: 50000 88 IBM Tivoli Netcool Performance Manager: Installation Guide
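You can sanity-check these defaults from a shell before starting. In the sketch below, the JDBC driver JAR name db2jcc.jar is an assumption; the paths and port are the defaults listed above:

# Verify the default DB2 locations and port for a minimal deployment
ls /opt/db2/product/10.1.0/java/db2jcc.jar
ls -d /opt/db2 /opt/db2/product/10.1.0
netstat -an | grep 50000    # DB2 should be listening on the default port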

101 Installing a minimal deployment
Provides step-by-step instructions for installing Tivoli Netcool Performance Manager on a single Linux server.
Starting the launchpad
The steps required to start the launchpad.
Procedure
To start the launchpad, follow these steps:
1. Log in as root.
2. Set and export the DISPLAY variable. See Setting up a remote X Window display on page 30.
3. Set and export the BROWSER variable to point to your Web browser. For example, on Linux systems:
# BROWSER=/usr/bin/firefox
# export BROWSER
Note: The BROWSER command cannot include any spaces around the equal sign.
4. Change directory to the directory where the launchpad resides. On Linux systems:
# cd <DIST_DIR>/proviso/RHEL
<DIST_DIR> is the directory on the hard drive where you copied the contents of the Tivoli Netcool Performance Manager distribution. For more information, see Downloading the Tivoli Netcool Performance Manager distribution to disk on page 39.
5. Enter the following command to start the launchpad:
# ./launchpad.sh
Starting the installation
The steps that are required to install.
About this task
A minimal deployment installation uses a predefined topology file.
Procedure
To start the installation, follow these steps:
1. On the launchpad, click the Install Tivoli Netcool Performance Manager for Minimal Deployment option in the list of tasks, and then click the Install Tivoli Netcool Performance Manager for Minimal Deployment link to start the deployer. Alternatively, you can start the deployer from the command line, as follows:
a. Log in as root.
b. Set and export your DISPLAY variable (see Setting up a remote X Window display on page 30). Chapter 5. Installing as a minimal deployment 89

102 c. Change directory to the directory that contains the deployer. On Linux systems:
# cd <DIST_DIR>/proviso/RHEL/Install/deployer
d. Enter the following command:
# ./deployer.bin -Daction=poc -DPrimary=true
2. The deployer opens, displaying a welcome page. Click Next to continue.
3. Accept the terms of the license agreement, and then click Next.
4. Accept the default location of the base installation directory of the DB2 JDBC driver (/opt/db2/product/10.1.0/java), or click Choose to go to another directory. Click Next to continue.
5. The deployer prompts for the directory in which to install Tivoli Netcool Performance Manager. Accept the default value (/opt/proviso) or click Choose to go to another directory. Click Next to continue.
6. Verify the following extra information about the DB2 database:
v DB2 Base: The base directory for the DB2 installation (for example, /opt/db2). Accept the provided path or click Choose to go to another directory.
v Database Home: The root directory of the DB2 database (for example, /opt/db2/product/10.1.0). Accept the provided path or click Choose to go to another directory.
v DB2 Port: The port that is used for DB2 communications. The default value is 50000.
Click Next to continue.
7. The node selection window shows the target system and how the files are transferred. These settings are ignored for a minimal deployment installation because all the components are installed on a single server. Click Next to continue.
8. Provide media location details. The Tivoli Netcool Performance Manager Media Location for components window is displayed, listing component and component platform.
a. Click the Choose the Proviso Media button. You are asked to provide the location of the media for each component.
b. Enter the base directory in which your media is located. If any of the component media is not within the specified directory, you are asked to provide media location details for that component.
9. The deployer displays summary information about the installation. Review the information, and then click Next to begin the installation. The deployer displays the table of installation steps (see Pre-deployment check on page 76 for an overview of the steps table). Note the following:
v If an installation step fails, see Appendix I, Error codes and log files, on page 187 for debugging information. Continue the installation by following the instructions in Resuming a partially successful first-time installation on page 84.
v Some of the installation steps can take a long time to complete. However, if an installation step fails, it fails in a short amount of time.
10. Click Run All to run all the steps in sequence.
11. When all the steps have completed successfully, click Done to close the wizard. 90 IBM Tivoli Netcool Performance Manager: Installation Guide
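Taken together, the command-line alternative in step 1 amounts to the following shell session (the DISPLAY value and <DIST_DIR> are examples to replace with your own):

# As root, start the minimal-deployment deployer directly
DISPLAY=mydesktop:0.0; export DISPLAY
cd <DIST_DIR>/proviso/RHEL/Install/deployer
./deployer.bin -Daction=poc -DPrimary=true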

103 Results
Your installation is complete. For information about the post-installation script, see The post-installation script. For what to do next, see Next steps.
The post-installation script
The post-installation script is run automatically when installation is complete.
About this task
For a minimal deployment, the script performs four actions:
Procedure
1. Starts the DataChannel.
2. Starts the DataLoad SNMP Collector, if it is not already running.
3. Creates a DataView user named tnpm.
4. Gives the poc user permission to view reports under the NOC Reporting group, with the default password of tnpm.
Results
The script writes a detailed log to the file /var/tmp/poc-postinstall.${timestamp}.log.
Next steps
The steps to be performed following the deployment of your system.
When the installation is complete, you are ready to perform the final configuration tasks that enable you to view reports on the health of your network. These steps are documented in detail in the Tivoli Netcool Performance Manager documentation set.
For each new SNMP DataLoad, change the env file of the Tivoli Netcool Performance Manager user to add the directory with the OpenSSL libcrypto.so to the LD_LIBRARY_PATH (or LIBPATH).
Downloading the MIB-II files
The minimal deployment version does not install the MIB-II Technology Pack.
About this task
Before you begin the manual installation of the technology pack, you must download both the Technology Pack Installer and the MIB-II JAR files. To download these files, access either of the following distributions:
Procedure
v The product distribution site. Located on the product distribution site are the ProvisoPackInstaller.jar file, the bundled JAR file, and individual stand-alone technology pack JAR files. Chapter 5. Installing as a minimal deployment 91

104 v Optional: The Tivoli Netcool Performance Manager CD distribution, which contains the ProvisoPackInstaller.jar file and the JAR files for the Starter Kit components. See your IBM customer representative for more information about obtaining software.
Note: The Technology Pack Installer and the MIB-II JAR files must be in the same directory (for example, AP), and no other application JAR files can be present. If there are any other JAR files in that folder, the installation step fails because there are too many JAR files in the specified folder. In addition, you must add the AP directory to the Tivoli Netcool Performance Manager distribution's directory structure, as shown in the sketch that follows this section.
What to do next
For more information about the MIB-II Technology Pack, see the MIB-II Technology Pack Reference. For more information about installing the technology pack, see the Technology Pack Installation and Configuration Guide. 92 IBM Tivoli Netcool Performance Manager: Installation Guide
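For example, staging the files might look like the following sketch; the MIB-II JAR file name is a placeholder, because the real name depends on the pack version you download:

# Stage the pack installer and MIB-II JAR in a dedicated AP directory
mkdir -p <DIST_DIR>/proviso/AP
cp ProvisoPackInstaller.jar mibii_pack.jar <DIST_DIR>/proviso/AP/
ls <DIST_DIR>/proviso/AP    # only these JAR files may be present, or the install step fails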

105 Chapter 6. Modifying the current deployment
Describes how to modify an installation of Tivoli Netcool Performance Manager.
It is possible to modify Tivoli Netcool Performance Manager after it has been installed. To add, delete, or upgrade components, load the deployed topology from the database, make your changes, and run the deployer with the updated topology as input.
Note: You must run the updated topology through the deployer in order for your changes to take effect.
Note the following:
v After your initial deployment, always load the topology from the database to make any additional changes (such as adding or removing a component), because it reflects the current status of your environment. Once you have made your changes, you must deploy the updated topology so that it is propagated to the database. To make any subsequent changes following this deployment, you must load the topology from the database again.
v You might have a situation where you have modified a topology by both adding new components and removing components (marking them "To Be Removed"). However, the deployer can work in only one mode at a time - installation mode or uninstallation mode. In this situation, first run the deployer in uninstallation mode, then run it again in installation mode.
For information about deleting components from an existing topology, see Removing a component from the topology on page 131.
Opening a deployed topology
After you have installed Tivoli Netcool Performance Manager, you can perform incremental installations by modifying the topology that is stored in the database.
About this task
You can retrieve the topology, modify it, then pass the updated data to the deployer. When the installation is complete, the deployer stores the revised topology data in the database.
Procedure
To open a deployed topology, follow these steps:
1. If it is not already open, open the Topology Editor (see Starting the Topology Editor on page 59).
2. In the Topology Editor, select Topology > Open existing topology. The Open Topology window is displayed.
3. For the topology source, select the database, and click Next.
4. Verify that all of the fields for the database connection are entered with correct values:
v Database hostname - The name of the database host. The default value is localhost. Copyright IBM Corp. 2006, 2013 93

106 v Port - The port number that is used for communication with the database. The default value is 50000.
v Database user - The user name that is used to access the database. The default value is PV_INSTALL.
v Database Password - The password for the database user account. For example, PV.
v DB Name - The name of the database. The default value is PV.
If wanted, click Save as defaults to save these values for future incremental installations.
5. Click Finish.
Results
The topology is retrieved from the database and is displayed in the Topology Editor.
Adding a new component
After you have deployed your topology, you might need to make changes to it.
About this task
For example, you might want to add another SNMP collector.
Procedure
To add a new component to the topology, follow these steps:
1. If it is not already open, open the Topology Editor (see Starting the Topology Editor on page 59).
2. Open the existing topology (see Opening a deployed topology on page 93).
3. In the Logical view of the Topology Editor, right-click the folder for the component you want to add.
4. Select Add XXX from the pop-up menu, where XXX is the name of the component you want to add.
5. The Topology Editor prompts for whatever information is needed to create the component. See the appropriate section for the component you want to add:
v Adding the hosts on page 60
v Adding a database configurations component on page 62
v Adding a DataMart on page 63
v Adding a Discovery Server on page 64
v Adding a Tivoli Integrated Portal on page 65
v Adding a DataView on page 67
v Add the DataChannel administrative components on page 67
v Adding a DataChannel on page 68
v Adding a collector on page 71
Note that if you add a collector to a topology that has already been deployed, you must manually bounce the DataChannel management components (cnsw, logw, cmgrw, amgrw). For more information, see Manually starting the Channel Manager programs on page 148.
v Adding a Discovery Server on page 64 94 IBM Tivoli Netcool Performance Manager: Installation Guide

107 6. The new component is displayed in the Logical view of the Topology Editor.
7. Save the updated topology. You must save the topology after you add the component and before you run the deployer. This step is not optional.
8. Run the deployer (see Starting the deployer on page 75), passing the updated topology as input. The deployer can determine that most of the components described in the topology are already installed, and installs only the new component.
9. When the installation ends successfully, the deployer uploads the updated topology into the database.
For information about removing a component from the Tivoli Netcool Performance Manager environment, see Removing a component from the topology on page 131.
Example
In this example, you update the installed version of Tivoli Netcool Performance Manager to add a new DataChannel and two SNMP DataLoaders to the existing system. To update the Tivoli Netcool Performance Manager installation:
1. If it is not already open, open the Topology Editor (see Starting the Topology Editor on page 59).
2. Open the existing topology (see Opening a deployed topology on page 93).
3. In the Logical view of the Topology Editor, right-click the DataChannel folder.
4. Select Add Data Channel from the pop-up menu. Following the directions in Adding a DataChannel on page 68, add the following components:
a. Add a DataChannel (Data Channel 2) with two different SNMP DataLoaders to the topology. The Topology Editor creates the new DataChannel.
b. Add two SNMP collectors to the channel structure created by the Topology Editor. The editor automatically creates a Daily Loader component, an Hourly Loader component, and two Sub Channels with an FTE component and a CME component.
5. Save the updated topology.
6. Run the deployer (see Starting the deployer on page 75), passing the updated topology as input. The deployer can determine that most of the components described in the topology are already installed, and installs only the new components (in this example, DataChannel 2 with two new subchannels and DataLoaders).
7. When the installation ends, successful or not, the deployer uploads the updated topology into the database. Chapter 6. Modifying the current deployment 95

108 Changing configuration parameters of existing Tivoli Netcool Performance Manager components
Configuration information is stored in the database. This enables the DataChannel-related components to retrieve the configuration from the database at run time. You set the configuration information by using the Topology Editor. As with the other components, if you make changes to the configuration values, you must pass the updated topology data to the deployer to have the changes propagated to both the environment and the database.
Note: After the updated configuration has been stored in the database, you must manually start, stop, or bounce the affected DataChannel component to have your changes take effect.
Moving components to a different host
You can use the Topology Editor to move components between hosts.
About this task
You can move all components between hosts when they have not yet been installed and are in the configured state. You can move SNMP and UBA collectors when they are in the configured state or after they have been deployed and are in the installed state.
If the component in the topology has not yet been deployed and is in the configured state, the Topology Editor provides a Change Host option in the pop-up menu when you click the component name in the Logical view. This option allows you to change the host associated with the component prior to deployment.
If the component is an SNMP or UBA collector that was previously deployed and is in the installed state, the Topology Editor provides a Migrate option in the pop-up menu. This option instructs the deployer to uninstall the component from the previous host and re-install it on the new system.
For instructions on moving deployed SNMP and UBA collectors after deployment, see Moving a deployed collector to a different host on page 97. For instructions on moving components that have not yet been deployed, see the information below.
Note: Movement of installed DataChannel Remote components is not supported. All other components can be moved.
To change the host associated with a component before deployment:
Procedure
1. Start the Topology Editor (if it is not already running) and open the topology that includes the component's current host (see Starting the Topology Editor on page 59 and Opening a deployed topology on page 93).
2. In the Logical view, navigate to the name of the component to move.
3. Right-click the component name, and then click Change Host from the pop-up menu. 96 IBM Tivoli Netcool Performance Manager: Installation Guide

109 The Migrate Component dialog appears, containing a drop-down list of hosts where you can move the component.
4. Select the name of the new host from the list, then click Finish. The name of the new host appears in the Properties tab.
Moving a deployed collector to a different host
You can move a deployed SNMP or UBA collector to a different host. The instructions for doing so differ for SNMP collectors and UBA collectors. After you move a collector to a new host, it may take up to an hour for the change to be registered in the database.
Moving a deployed SNMP collector
The steps required to move a deployed SNMP collector to a different host.
About this task
Note: To avoid the loss of collected data, leave the collector running on the original host until you complete Step 7 on page 108.
Procedure
1. Start the Topology Editor (if it is not already running) and open the topology that includes the collector's current host (see Starting the Topology Editor on page 59 and Opening a deployed topology on page 93).
2. In the Logical view, navigate to the name of the collector to move. For example, if moving SNMP 1.1, navigate as follows: DataChannels > DataChannel 1 > Collector 1.1 > Collector SNMP 1.1
3. Right-click the collector name (for example, Collector SNMP 1.1), then click Migrate from the pop-up menu. The Migrate Collector dialog appears, containing a drop-down list of hosts where you can move the collector.
Note: If you are moving a collector that has not been deployed, select Change host from the pop-up menu (Migrate is grayed out). After the Migrate Collector dialog appears, continue with the steps below.
4. Select the name of the new host from the list, then click Finish. In the Physical view, the status of the collector on the new host is Configured. The status of the collector on the original host is To be uninstalled. You will remove the collector from the original host in Step 9.
Note: If you are migrating a collector that has not been deployed, the name of the original host is automatically removed from the Physical view.
5. Click Topology > Save Topology to save the topology data.
6. Click Run > Run Deployer for Installation to run the deployer, passing the updated topology as input. For more information on running the deployer, see Starting the deployer on page 75. The deployer installs the collector on the new host and starts it.
Note: Both collectors are now collecting data - the original collector on the original host, and the new collector on the new host. Chapter 6. Modifying the current deployment 97

110 7. Before continuing with the steps below, note the current time, and wait until a time period equivalent to two of the collector's collection periods elapses. Doing so guards against data loss between collections on the original host and the start of collections on the new host. Because data collection on the new host is likely to begin sometime after the first collection period begins, the data collected during the first collection period will likely be incomplete. By waiting for two collection time periods to elapse, you can be confident that data for one full collection period will be collected. The default collection period is 15 minutes. You can find the collection period for the subelement, subelement group, or collection formula associated with the collector in the DataMart Request Editor. For information on viewing and setting a collection period, see the IBM Tivoli Netcool Performance Manager: DataMart Configuration and Operation Guide.
8. Bounce the FTE for the collector on the collector's new host, as in the following example:
./dccmd bounce FTE.1.1
The FTE now recognizes the collector's configuration on the new host, and will begin retrieving data from the collector's output directory on the new host.
9. In the current Topology Editor session, click Run > Run Deployer for Uninstallation to remove the collector from the original host, passing the updated topology as input. For more information, see Removing a component from the topology on page 131.
Note: This step is not necessary if you are moving a collector that has not been deployed.
Moving a deployed SNMP collector to or from a HAM environment
If you move a deployed SNMP collector into or out of a High Availability Manager (HAM) environment, you must perform the steps in this section.
Procedure
1. Move the collector as described in Moving a deployed SNMP collector on page 97.
Note: If you are moving a spare collector out of the HAM environment, the navigation path is different than the path shown in Step 2 of the above instructions. For example, you have a single HAM environment with a cluster MyCluster on host MyHost, and you are moving the second SNMP spare out of the HAM. The navigation path to the spare would be as follows: DataChannels > Administrative Components > High Availability Managers > HAM MyServer.1 > MyCluster > Collector Processes > Collection Process SNMP Spare 2
2. Log in as the Tivoli Netcool Performance Manager UNIX user, pvuser, on the collector's new host.
3. Change to the directory where DataLoad is installed. For example: cd /opt/dataload
4. Source the DataLoad environment: . ./dataload.env
5. Stop the SNMP collector: pvmdmgr stop 98 IBM Tivoli Netcool Performance Manager: Installation Guide

111 6. Edit the file dataload.env and set the field DL_HA_MODE as follows:
v Set DL_HA_MODE=true if you moved the collector to a HAM host.
v Set DL_HA_MODE=false if you moved the collector from a HAM host.
7. Source the DataLoad environment again: . ./dataload.env
8. Start the SNMP collector: pvmdmgr start
Note: If you move an SNMP collector to or from a HAM host, you must bounce the HAM. For more information, see Stopping and restarting modified components on page 125.
Moving a deployed UBA bulk collector
The steps required to move a deployed UBA collector to a different host.
About this task
Note: You cannot move BCOL collectors, or UBA collectors that have a BLB or QCIF subcomponent. If you want to move a UBA collector that has these subcomponents, you must manually remove it from the old host in the topology and then add it to the new host.
Procedure
1. Log in as pvuser to the DataChannel host where the UBA collector is running.
2. Change to the directory where DataChannel is installed. For example: cd /opt/datachannel
3. Source the DataChannel environment: . ./datachannel.env
4. Stop the collector's UBA and FTE components. For example, to stop these components for UBA collector 1.1, run the following commands:
dccmd stop UBA.1.1
dccmd stop FTE.1.1
For information on the dccmd command, see the Tivoli Netcool Performance Manager: Command Line Interface Guide.
Note: Some technology packs have additional pack-specific components that must be shut down - namely, BLB (bulk load balancer) and IF (inventory file) components. IF component names have the format xxxIF, where xxx is a pack-specific name. For example, the Cisco CWM Technology Pack has a CWMIF component, the Alcatel-Lucent 5620 SAM Technology Pack has a SAMIF component, and the Alcatel-Lucent 5620 NM Technology Pack has a QCIF component. Other packs do not use these technology-specific components.
5. Tar up the UBA collector's UBA directory. You will copy this directory to the collector's new host later in the procedure (Step 13). For example, to tar up the UBA directory for UBA collector 1.1, run the following command:
tar -cvf UBA_1_1.tar ./UBA.1.1/*
Note: This step is not necessary if the collector's current host and the new host share a file system. Chapter 6. Modifying the current deployment 99

112 Note: Some technology packs have additional pack-specific directories that need to be moved. These directories have the same names as the corresponding pack-specific components described in Step 4.
6. Start the Topology Editor (if it is not already running) and open the topology that includes the collector's current host (see Starting the Topology Editor on page 59 and Opening a deployed topology on page 93).
7. In the Logical view, navigate to the name of the collector to move - for example, Collector UBA 1.1.
8. Right-click the collector name and select Migrate from the pop-up menu. The Migrate Collector dialog appears, containing a drop-down list of hosts where you can move the collector.
9. Select the name of the new host from the list, then click Finish. In the Physical view, the status of the collector on the new host is Configured. The collector is no longer listed under the original host.
Note: If the UBA collector was the only DataChannel component on the original host, the collector will be listed under that host, and its status will be "To be uninstalled." You can remove the DataChannel installation from the original host after you finish the steps below. For information on removing DataChannel from the host, see Removing a component from the topology on page 131.
10. Click Topology > Save Topology to save the topology.
11. Click Run > Run Deployer for Installation to run the deployer, passing the updated topology as input. For more information on running the deployer, see Starting the deployer on page 75. If DataChannel is not already installed on the new host, this step installs it.
12. Click Run > Run Deployer for Uninstallation to remove the collector from the original host, passing the updated topology as input. For more information, see Removing a component from the topology on page 131.
13. Copy any directory you tarred in Step 5 and the associated JavaScript files to the new host.
Note: This step is not necessary if the collector's original host and the new host share a file system.
For example, to copy UBA_1_1.tar and the JavaScript files from the collector's original host:
a. Log in as pvuser to the UBA collector's new host.
b. Change to the directory where DataChannel is installed. For example: cd /opt/datachannel
c. FTP to the collector's original host.
d. Run the following commands to copy the tar file to the new host. For example:
cd /opt/datachannel
get UBA_1_1.tar
bye
tar -xvf UBA_1_1.tar
e. Change to the directory where the JavaScript files for the technology pack associated with the collector are located: cd /opt/datachannel/scripts 100 IBM Tivoli Netcool Performance Manager: Installation Guide

113 f. FTP the JavaScript files from the /opt/datachannel/scripts directory on the original host to the /opt/datachannel/scripts directory on the new host.
14. Log in as pvuser to the Channel Manager host where the Administrator Components (including CMGR) are running.
15. Stop and restart the Channel Manager by performing the following steps:
a. Change to the $DC_HOME directory (typically, /opt/datachannel).
b. Source the DataChannel environment: . ./datachannel.env
c. Get the CMGR process ID by running the following command:
ps -ef | grep CMGR
The process ID appears in the output immediately after the user ID, as shown in the following example:
pvuser 6561 1 0 Aug 21 ? 3:04 /opt/datachannel/bin/cmgr_visual -nologo /opt/datachannel/bin/dc.im -a CMGR
pvuser <pid> <ppid> 0 14:39:38 pts/7 0:00 grep CMGR
d. Stop the CMGR process. For example, if 6561 is the CMGR process ID: kill 6561
e. Change to the $DC_HOME/bin directory (typically, /opt/datachannel/bin).
f. Restart CMGR by running the following command: ./cmgrw
16. Log in as pvuser to the UBA collector's new host and change to the $DC_HOME/bin directory (typically, /opt/datachannel/bin).
17. Run the following command to verify that the Application Manager (AMGR) is running on the new host: ./findvisual
If the AMGR process is running, you will see output that includes an entry like the following:
pvuser <pid> 1 0 Aug 21 ? 3:43 /opt/datachannel/bin/amgr_visual -nologo /opt/datachannel/bin/dc.im -a AMGR
Note: If AMGR is not running on the new host, do not continue. Verify that you have performed the preceding steps correctly.
18. Start the collector's UBA and FTE components on the new host. For example, to start these components for collector 1.1, run the following commands:
./dccmd start UBA.1.1
./dccmd start FTE.1.1
Note: If any pack-specific components were shut down on the old host (see Step 4), you must also start those components on the new host.
Changing the port for a collector
You can use the Topology Editor to change the port associated with a collector.
About this task
To change the port associated with a collector after deployment: Chapter 6. Modifying the current deployment 101

114 Procedure
1. Start the Topology Editor (if it is not already running) and open the topology (see Starting the Topology Editor on page 59 and Opening a deployed topology on page 93).
2. In the Logical view, navigate to the collector.
3. Highlight the collector to view its properties. The Topology Editor displays both the collector core parameters and the technology pack-specific parameters.
4. Edit the port parameter within the list, SERVICE_PORT, and then click Finish.
5. Click Topology > Save Topology to save the topology data.
6. Click Run > Run Deployer for Installation to run the deployer, passing the updated topology as input.
7. When deployment is complete, log in to the server hosting the collector.
8. Log in as the Tivoli Netcool Performance Manager UNIX user, pvuser, on the collector's host.
9. Change to the directory where DataLoad is installed. For example: cd /opt/dataload
10. Source the DataLoad environment: . ./dataload.env
11. Stop the SNMP collector: pvmdmgr stop
12. Edit the file dataload.env and set the field DL_ADMIN_TCP_PORT to the new port number. For example: DL_ADMIN_TCP_PORT=<new port number>
13. Source the DataLoad environment again: . ./dataload.env
14. Start the SNMP collector: pvmdmgr start
Modifying Tivoli Integrated Portal and Tivoli Common Reporting ports
You can update the ports used by Tivoli Integrated Portal and Tivoli Common Reporting. The Tivoli Integrated Portal specific ports that are defined and used to build the topology.xml file are as follows:
WAS_WC_defaulthost 16310
COGNOS_CONTENT_DATABASE_PORT 1557
IAGLOBAL_LDAP_PORT 389
Changing ports for the Tivoli Common Reporting console
You can assign new ports to an installed Tivoli Common Reporting console.
Procedure
1. Create a properties file containing values, such as the host name, that match your environment. The example properties file below uses default values. Modify the values to match your environment. Save the file in any location.
WAS_HOME=C:/ibm/tivoli/tip22
was.install.root=C:/ibm/tivoli/tip22
profileName=TIPProfile
profilePath=C:/ibm/tivoli/tipv2/profiles/TIPProfile 102 IBM Tivoli Netcool Performance Manager: Installation Guide

115 templatePath=C:/ibm/tivoli/tipv2/profileTemplates/default
nodeName=TIPNode
cellName=TIPCell
hostName=your_tcr_host
portsFile=C:/ibm/tivoli/tipv2/properties/TIPPortDef.properties
2. Edit the TCR_install_dir\properties\TIPPortDef.properties file to contain the desired port numbers.
3. Stop the Tivoli Common Reporting server by navigating to the following directory in the command-line interface:
v On Windows systems: TCR_component_dir\bin, and running the stoptcrserver.bat command.
v On Linux and UNIX systems: TCR_component_dir/bin, and running the stoptcrserver.sh command.
Important: To stop the server, you must log in with the same user that you used to install Tivoli Common Reporting.
4. In the command-line interface, navigate to the TCR_install_dir\bin directory.
5. Run the following command:
ws_ant.bat -propertyfile C:\temp\tcrwas.props -file "C:\IBM\tivoli\tipv2\profileTemplates\default\actions\updateports.ant"
C:\temp\tcrwas.props is the path to the properties file created in Step 1.
6. Change the port numbers in IBM Cognos Configuration:
a. Open IBM Cognos Configuration by running TCR_component_dir\cognos\bin\tcr_cogconfig.bat for Windows operating systems and TCR_install_dir/cognos/bin/tcr_cogconfig.sh for Linux and UNIX.
b. In the Environment section, change the port numbers to the desired values, as in Step 2.
c. Save your settings and close IBM Cognos Configuration.
7. Start the Tivoli Common Reporting server by navigating to the following directory in the command-line interface:
v On Windows systems: TCR_component_dir\bin, and running the starttcrserver.bat command.
v On Linux and UNIX systems: TCR_component_dir/bin, and running the starttcrserver.sh command.
Important: To start the server, you must log in with the same user that you used to install Tivoli Common Reporting.
Port assignments
The application server requires a set of sequentially numbered ports. The sequence of ports is supplied during installation in the response file. The installer checks that the required number of ports (starting with the initial port value) are available before assigning them. If one of the ports in the sequence is already in use, the installer automatically terminates the installation process, and you must specify a different range of ports in the response file. Chapter 6. Modifying the current deployment 103
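Because the installer needs the entire sequential range to be free, it can be useful to scan the candidate range before you install. The following sketch checks the default TIP range used elsewhere in this chapter; the width of the range (16 ports) is an assumption:

# Report any port in the candidate range that is already in use
for p in $(seq 16310 16325); do
    netstat -an | grep -q "[.:]${p} .*LISTEN" && echo "port ${p} is in use"
done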

116 Viewing the application server profile
Open the application server profile to review the port number assignments and other information.
About this task
The profile of the application server is available as a text file on the computer where it is installed.
Procedure
1. Locate the /opt/ibm/tivoli/tipv2/profiles/TIPProfile/logs directory.
2. Open AboutThisProfile.txt in a text editor.
Example
This is the profile for an installation as it appears in /opt/ibm/tivoli/tipv2/profiles/TIPProfile/logs/AboutThisProfile.txt:
Application server environment to create: Application server
Location: /opt/ibm/tivoli/tcr/profiles/TIPProfile
Disk space required: 200 MB
Profile name: TIPProfile
Make this profile the default: True
Node name: TIPNode
Host name: tivoliadmin.usca.ibm.com
Enable administrative security (recommended): True
Administrative console port: 16315
Administrative console secure port: 16316
HTTP transport port: 16310
HTTPS transport port: 16311
Bootstrap port: 16312
SOAP connector port: 16313
Run application server as a service: False
Create a Web server definition: False
What to do next
If you want to see the complete list of defined ports on the application server, you can open /opt/ibm/tivoli/tipv2/properties/TIPPortDef.properties in a text editor:
#Create the required WAS port properties for TIP
#Mon Oct 06 09:26:30 PDT 2008
CSIV2_SSL_SERVERAUTH_LISTENER_ADDRESS=16323
WC_adminhost=16315
DCS_UNICAST_ADDRESS=16318
BOOTSTRAP_ADDRESS=16312
SAS_SSL_SERVERAUTH_LISTENER_ADDRESS=16321
SOAP_CONNECTOR_ADDRESS=16313
ORB_LISTENER_ADDRESS=16320
WC_defaulthost_secure=16311
CSIV2_SSL_MUTUALAUTH_LISTENER_ADDRESS=16322
WC_defaulthost=16310
WC_adminhost_secure=16316 104 IBM Tivoli Netcool Performance Manager: Installation Guide
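To list the same port assignments from the command line instead of a text editor, a one-liner such as the following works (path as shown above):

grep "=" /opt/ibm/tivoli/tipv2/properties/TIPPortDef.properties | sort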

117 Chapter 7. Using the High Availability Manager This chapter describes the optional Tivoli Netcool Performance Manager High Availability Manager (HAM), including how to set up a HAM environment. Overview The High Availability Manager (HAM) is an optional component for large installations that want to use redundant SNMP collection paths. The HAM constantly monitors the availability of one or more SNMP collection hosts, and switches collection to a backup host (called a spare) if a primary host becomes unavailable. The following figure shows a simple HAM configuration with one primary host and one spare. In the panel on the left, the primary host is operating normally. SNMP data is being collected from the network and channeled to the primary host. In the panel on the right, the HAM has detected that the primary host is unavailable, so it dynamically unbinds the collection path from the primary host and binds it to the spare. HAM basics An SNMP collector collects data from a specific set of network resources according to a set of configuration properties. A collector has two basic parts: the collector process running on the host computer, and the collector profile that defines the collector's properties. Note: Do not confuse a collector profile with an inventory profile. A collector profile contains properties used in the collection of data from network resources - properties such as collector number, polling interval, and output directory for the collected data. An inventory profile contains information used to discover network resources - properties such as the addresses of the resources to look for and the mode of discovery. A collector that is not part of a HAM environment is static - that is, the collector process and the collector profile are inseparable. But in a HAM environment, the Copyright IBM Corp. 2006,

118 collector process and collector profile are managed as separate entities. This means that if a collector process is unavailable (due to a collector process crash or a host machine outage), the HAM can dynamically re-configure the collector, allowing data collection to continue. The HAM does so by unbinding the collector profile from the unavailable collector process on the primary host, and then binding the collector profile to a collector process on a backup (spare) host.
Note: It may take several minutes for the HAM to re-configure a collector, depending on the amount of data being collected.
The parts of a collector
Collector parts and their description.
When you set up a HAM configuration in the Topology Editor, you manage the two parts of a collector - the collector process and the collector profile - through the following folders in the Logical view:
Collector Processes
A collector process is a Unix process representing a runtime instance of a collector. A collector process is identified by the name of the host where the process is running and by the collector process port (typically 3002). A host can have just one SNMP collector process.
Managed Definitions
A managed definition identifies a collector profile through the unique collector number defined in the profile. Every managed definition has a default binding to a host and to the collector process on that host. The default host and collector process are called the managed definition's primary host and collector process. A host that you designate as a spare host has a collector process but no default managed definition.
The following figure shows the parts of a collector that you manage through the Collector Process and Managed Definition folders. In the figure, the HAM dynamically unbinds the collector profile from the collector process on the primary host, and then binds the profile to the collector process on the spare. This dynamic re-binding of the collector is accomplished when the HAM binds the managed definition - in this case, represented by the unique collector ID, Collector 1 - to the collector process on the spare. 106 IBM Tivoli Netcool Performance Manager: Installation Guide

119 Clusters
A HAM environment can consist of a single set of hosts or multiple sets of hosts. Each set of hosts in a HAM environment is called a cluster.
HAM cluster configuration
A cluster is a logical grouping of hosts and collector processes that are managed by a HAM. The use of multiple clusters is optional. Whether you use multiple clusters or just one has no effect on the operation of the HAM. Clusters simply give you a way to separate one group of collectors from another, so that you can better deploy and manage your primary and spare collectors in a way that is appropriate for your needs. Multiple clusters may be useful if you have a large number of SNMP collector hosts to manage, or if the hosts are located in various geographic areas. The clusters in a given HAM environment are distinct from one another. In other words, the HAM cannot bind a managed definition in one cluster to a collector process in another.
For host failover to occur, a HAM cluster must have at least one available spare host. The cluster can have as few as two hosts - one primary and one spare. Or, it can have multiple primary hosts with one or more spares ready to replace primary hosts that become unavailable. The ratio of primary hosts to spare hosts is expressed as p+s. For example, a HAM cluster with four primary hosts and two spares is referred to as a 4+2 cluster.
Types of spare hosts
There are two types of spare hosts:
Designated spare
The sole purpose of this type of spare in a HAM cluster is to act as a backup host. A designated spare has a collector process, but no default managed definition. Its collector process remains idle until the HAM detects an outage on one of the active hosts, and binds that host's managed definition to the spare's collector process. A HAM cluster must have at least one designated spare.
Floating spare
This type of spare is a primary host that can also act as a backup host for one or more managed definitions. Chapter 7. Using the High Availability Manager 107

120 Types of HAM clusters The types of HAM clusters that can be created. When the HAM binds a managed definition to a spare (either a designated spare or a floating spare), the spare becomes an active component of the collector. It remains so unless you explicitly reassign the managed definition back to its primary host or to another available host in the HAM cluster. This is an important fact to consider when you plan the hosts to include in a HAM cluster. There are two types of HAM clusters: Fixed spare cluster In this type of cluster, failover can occur only to designated spares. There are no floating spares in this type of cluster. When the HAM binds a managed definition to the spare, the spare temporarily takes the place of the primary that has become unavailable. When the primary becomes available again, you must reassign the managed definition back to the primary (or to another available host). The primary then resumes its data collection operations, and the spare resumes its role as backup host. If you do not reassign the managed definition back to the primary, the primary cannot participate in further collection operations. Since the primary is not configured as a floating spare, it also cannot act as a spare now that its collector process is idle. As a result, the HAM cluster loses its failover capabilities if no other spare is available. Note: A primary host cannot act as a spare unless it is configured as a floating spare. Floating spare cluster This type of cluster has one or more primary hosts that can also act as a spare. Failover can occur to a floating spare or to a designated spare. You do not need to reassign the managed definition back to this type of primary, as you do with primaries in a fixed spare cluster. When a floating spare primary becomes available again, it assumes the role of a spare. You can designate some or all of the primaries in a HAM cluster as floating spares. If all the primaries in a HAM cluster are floating spares, you should never have to reassign a managed definition to another available host in order to maintain failover capability. Note: IBM recommends that all the primaries in a cluster be of the same type - either all floating spares or no floating spares. 108 IBM Tivoli Netcool Performance Manager: Installation Guide

Example HAM clusters

Examples of HAM cluster options.

The Tivoli Netcool Performance Manager High Availability Manager feature is designed to provide great flexibility in setting up a HAM cluster. The following illustrations show just a few of the possible variations.

1+1, fixed spare

A fixed spare cluster with one primary host and one designated spare.

The figure below shows a fixed spare cluster with one primary host and one designated spare:
v In the panel on the left, Primary1 is functioning normally. The designated spare is idle.
v In the panel on the right, Primary1 experiences an outage. The HAM unbinds the collector from Primary1 and binds it to the designated spare.
v With the spare in use and no other spares in the HAM cluster, failover can no longer occur - even after Primary1 returns to service. For failover to be possible again, you must reassign Collector 1 to Primary1. This idles the collector process on the spare, making it available for the next failover operation if Primary1 fails again.

Note: When a designated spare serves as the only spare for a single primary, as in a 1+1 fixed spare cluster, the HAM pre-loads the primary's collector definition on the spare. This results in a fast failover with a likely loss of no more than one collection cycle.

The following table shows the bindings that the HAM can and cannot make in this cluster:

Collector     Possible Host Bindings        Host Bindings Not Possible
Collector 1   Primary1 (default binding)    -
              Designated spare

2+1, fixed spare

A fixed spare cluster with two primary hosts and one designated spare.

The figure below shows a fixed spare cluster with two primary hosts and one designated spare:

v In the panel on the left, Primary1 and Primary2 are functioning normally. The designated spare is idle.
v In the panel on the right, Primary2 experiences an outage. The HAM unbinds the collector from Primary2 and binds it to the designated spare.
v With the spare in use and no other spares in the HAM cluster, failover can no longer occur - even after Primary2 returns to service. For failover to be possible again, you must reassign Collector 2 to Primary2. This idles the collector process on the spare, making it available for the next failover operation.

The following table shows the bindings that the HAM can and cannot make in this cluster:

Collector     Possible Host Bindings        Host Bindings Not Possible
Collector 1   Primary1 (default binding)    Primary2
              Designated spare
Collector 2   Primary2 (default binding)    Primary1
              Designated spare

2+1, both primaries are floating spares

Both primaries are floating spares.

The figure below shows a floating spare cluster with two primary hosts and one designated spare, with each primary configured as a floating spare:
v In the panel on the left, Primary1 and Primary2 are functioning normally. The designated spare is idle.
v In the panel on the right, Primary2 experiences an outage. The HAM unbinds the collector from Primary2 and binds it to the designated spare.
v When Primary2 returns to service, it will assume the role of spare, meaning its collector process remains idle. The host originally defined as the designated spare continues as the active platform for Collector 2.

The following figure shows the same cluster after Primary2 has returned to service:
v In the panel on the left, Primary2 is idle, prepared to act as backup if needed.
v In the panel on the right, Primary1 experiences an outage. The HAM unbinds the collector from Primary1 and binds it to the floating spare, Primary2.

The following table shows the bindings that the HAM can and cannot make in this cluster:

Collector     Possible Host Bindings        Host Bindings Not Possible
Collector 1   Primary1 (default binding)    -
              Primary2
              Designated spare
Collector 2   Primary1                      -
              Primary2 (default binding)
              Designated spare

3+2, fixed spares

A fixed spare cluster with three primary hosts and two designated spares.

The figure below shows a fixed spare cluster with three primary hosts and two designated spares:
v In the panel on the left, all three primaries are functioning normally. The designated spares are idle.

v In the panel on the right, Primary3 experiences an outage. The HAM unbinds the collector from Primary3 and binds it to Designated Spare 2. The HAM chose Designated Spare 2 over Designated Spare 1 because the managed definition for Collector 3 set the failover priority in that order.
Note: Each managed definition sets its own failover priority. Failover priority can be defined differently in different managed definitions.
v With one spare in use and one other spare available (Designated Spare 1), failover is now limited to the one available spare - even after Primary3 returns to service. For dual failover to be possible again, you must reassign Collector 3 to Primary3.

The following table shows the bindings that the HAM can and cannot make in this cluster:

Collector     Possible Host Bindings        Host Bindings Not Possible
Collector 1   Primary1 (default binding)    Primary2
              Designated Spare 1            Primary3
              Designated Spare 2
Collector 2   Primary2 (default binding)    Primary1
              Designated Spare 1            Primary3
              Designated Spare 2
Collector 3   Primary3 (default binding)    Primary1
              Designated Spare 1            Primary2
              Designated Spare 2

3+2, all primaries are floating spares

A floating spare cluster with three primary hosts and two designated spares, with each primary configured as a floating spare.

The figure below shows a floating spare cluster with three primary hosts and two designated spares, with each primary configured as a floating spare:

v In the panel on the left, Primary3 had previously experienced an outage. The HAM unbound its default collector (Collector 3) from Primary3, and bound the collector to the first available spare in the managed definition's priority list, which happened to be Designated Spare 2. Now that Primary3 is available again, it is acting as a spare, while Designated Spare 2 remains the active collector process for Collector 3.
v In the panel on the right, Primary2 experiences an outage. The HAM unbinds Collector 2 from Primary2, and binds it to the first available spare in the managed definition's priority list. This happens to be the floating spare Primary3.
v When Primary2 becomes available again, there will once more be two spares available - Primary2 and Designated Spare 1.

The following table shows the bindings that the HAM can and cannot make in this cluster:

Collector     Possible Host Bindings        Host Bindings Not Possible
Collector 1   Primary1 (default binding)    -
              Primary2
              Primary3
              Designated Spare 1
              Designated Spare 2
Collector 2   Primary1                      -
              Primary2 (default binding)
              Primary3
              Designated Spare 1
              Designated Spare 2

Collector     Possible Host Bindings        Host Bindings Not Possible
Collector 3   Primary1                      -
              Primary2
              Primary3 (default binding)
              Designated Spare 1
              Designated Spare 2

Resource pools

When you configure a managed definition in the Topology Editor, you specify the hosts that the HAM can bind to the managed definition, and also the priority order in which the hosts are to be bound. This list of hosts is called the resource pool for the managed definition.

A resource pool includes:
v The managed definition's primary host and collector process (that is, the host and collector process that are bound to the managed definition by default).
v Zero or more other primary hosts in the cluster. If you add a primary host to a managed definition's resource pool, that primary host becomes a floating spare for the managed definition.
v Zero or more designated spares in the cluster. Typically, each managed definition includes one or more designated spares in its resource pool.

Note: If no managed definitions include a designated spare in their resource pools, there will be no available spares in the cluster, and therefore failover cannot occur in the cluster.

How the SNMP collector works

The SNMP collector capability and behaviour.

The SNMP collector is state-based and designed both to perform initialization and termination actions, and to "change state" in response to events generated by the HAM or as a result of internally-generated events (like a timeout, for example).

The following table lists the events that the SNMP collector understands and indicates whether they can be generated by the HAM:

Event     HAM-Generated   Description
Load      Yes             Load collection profile, do not begin scheduling collections.
Pause     Yes             Stop scheduling collections; do not unload profile.
Reset     Yes             Reset expiration timer.
Start     Yes             Start scheduling collections.
Stop      Yes             Stop scheduling collections; unload profile.

Event     HAM-Generated   Description
Timeout   No              Expiration timer expires; start scheduling collections.

The SNMP collector can reside in one of the following states, as shown in the following table:

SNMP Collector State   Event        Description
Idle                   N/A          Initial state; a collector number may or may not be assigned; the collection profile has not been loaded.
Loading                Load         Intermediate state between Idle and Ready. Occurs after a Load event. Collector number is assigned, and the collection profile is being loaded.
Ready                  N/A          Collector number assigned, profile loaded, but not scheduling requests or performing collections.
Starting               Start        Intermediate state between Idle and Running. Occurs after a Start event. Collector number assigned, and profile is being loaded.
Running                N/A          Actively performing requests and collections.
Stopping               Stop/Pause   Intermediate state between Running and Idle.

The following state diagram shows how the SNMP collector transitions through its various states depending upon events or time-outs.

How failover works with the HAM and the SNMP collector

The following tables illustrate how the HAM communicates with the SNMP collectors during failover for a 1+1 cluster and a 2+1 cluster.

Table 13. HAM and SNMP Collector in a 1+1 Cluster

State of Primary   State of Spare   Events and Actions
Running            Idle             The HAM sends the spare the Load event for the specified collection profile.
Running            Ready            The HAM sends a Pause event to the spare to extend the timeout.
                                    Note: If the timeout expires, the spare will perform start actions and transition to a Running state.
Running            Running          The HAM sends a Pause event to the collector process that has been in a Running state for a shorter amount of time.
No response        Ready            The HAM sends a Start event to the spare.

Table 14. HAM and SNMP Collector in a 2+1 Cluster

State of Primary   State of Spare   Events and Actions
Running            Idle             No action.
Running            Ready            No action.
Running            Running          The HAM sends a Stop event to the collector process that has been in a Running state for the shorter amount of time.
No response        Idle             The HAM sends a Start event to the spare.
No response        Ready            The HAM sends a Start event to the spare.

Because more than one physical system may produce SNMP collections, the File Transfer Engine (FTE) must check every capable system for a specific profile. The FTE retrieves all output for the specific profile. Any duplicated collections are reconciled by the Complex Metrics Engine (CME).

Obtaining collector status

How to get the status of a collector.

To obtain status on the SNMP collectors managed by the HAM, enter the following command on the command line:
$ dccmd status HAM.<hostname>.1
The dccmd command returns output similar to the following:

COMPONENT      APPLICATION   HOST     STATUS    ES DURATION   EXTENDED STATUS
HAM.DCAIX2.1   HAM           DCAIX2   running                 Ok: (box1:3012 -> Running 1.1 for 5h2m26s); No avail spare; Check: dcaix2:3002, birdnestb:3002
                                                              Ok: (box2:3002 -> Running 1.2 for 5h9m36s); No avail spare; Check: box4:3002, box5:3002
                                                              Not Running; No avail spare; Check: box4:3002, box5:3002

The following list describes the EXTENDED STATUS information:
v Load # - Collection profile 1.1.
v Ok: - Status of the load. Ok means it is properly collected; Not Running indicates a severe problem (data losses).
v (box1:3012 -> Running 1.1 for 5h2m26s) - The collector that is currently performing the load, with its status and uptime.
v No avail spare - List of possible spares, if something happens to the collector currently working. In this example there is no spare available, so a failover would fail. A list of host:port pairs would indicate the possible spare machines.
v Check: box4:3002, box5:3002 - Indicates what is currently wrong with the system/configuration. Machines box4:3002 and box5:3002 should be spares but are either not running, or not reachable. The user is instructed to check these machines.

For a 1-to-1 failover configuration, the dccmd command might return output like the following:
$ dccmd status HAM.SERVER.1
COMPONENT      APPLICATION   HOST     STATUS    ES DURATION   EXTENDED STATUS
HAM.SERVER.1   HAM           SERVER   running                 Ok: (box1:3002 -> Running 1.1 for 5h2m26s); 1 avail spare: (box2:3002 -> Ready 1.1)

The preceding output shows that Collector 1.1 is in a Running state on Box1, and that the Collector on Box2 is in a Ready state, with the profile for Collector 1.1 loaded.

Creating a HAM environment

This section describes the steps required to create a 3+1 HAM environment with a single cluster, and with all three primaries configured as floating spares.

About this task

This is just one of the many variations a HAM environment can have. The procedures described in the following sections indicate the specific steps where you can vary the configuration.

Note: If you are setting up a new Tivoli Netcool Performance Manager environment and plan to use a HAM in that environment, perform the following tasks in the following order:

Procedure
1. Install all collectors.
2. Configure and start the HAM.
3. Install all technology packs.
4. Perform the discovery.

Topology prerequisites

The minimum component prerequisites.

A 3+1 HAM cluster requires that you have a topology with the following minimum components:
v Three hosts, each bound to an SNMP collector. These will act as the primary hosts. You will create a managed definition for each of the primary hosts.
v One additional host that is not bound to an SNMP collector. This will act as the designated spare.

For information on installing these components, see Adding a new component on page 94.

Procedures

The general procedures for creating a single-cluster HAM with one designated spare and three floating spares.

Create the HAM and a HAM cluster

To create a High Availability Manager with a single cluster:

Procedure
1. Start the Topology Editor (if it is not already running) and open the topology where you want to add the HAM (see Starting the Topology Editor on page 59 and Opening a deployed topology on page 93).
2. In the Logical view, right-click High Availability Managers, located at DataChannels > Administrative Components.
3. Select Add High Availability Manager from the pop-up menu. The Add High Availability Manager Wizard appears.
4. In the Available hosts field, select the host where you want to add the HAM.
Note: You can install the HAM on a host where a collector process is installed, but you cannot install more than one HAM on a host.
5. In the Identifier field, accept the default identifier. The identifier has the following format:
HAM.<HostName>.<n>
where HostName is the name of the host you selected in Step 4, and n is a HAM-assigned sequential number, beginning with 1, that uniquely identifies this HAM from others that may be defined on other hosts.
6. Click Finish. The HAM identifier appears under the High Availability Managers folder.
7. Right-click the identifier of the HAM you just created.
8. Select Add Cluster from the pop-up menu. The Add Cluster Monitor Wizard appears.
9. In the Identifier field, type a name for the cluster and click Finish. The cluster name appears under the HAM identifier folder you added in Step 6. The following folders appear under the cluster name:
v Collector Processes
v Managed Definitions

Note: To add additional clusters to the environment, repeat Step 7 through Step 9.

Add the designated spare

How to create and add a designated spare.

About this task

To create a designated spare, you must have a host defined in the Physical view with no SNMP collector assigned to it. For information on adding a host to a topology, see Adding the hosts on page 60.

Procedure
1. In the Logical view, right-click the Collector Processes folder that you created in Step 9 of the previous section, Create the HAM and a HAM cluster on page 118.
2. Select Add Collection Process SNMP Spare from the pop-up menu. The Add Collection Process SNMP Spare - Configure Collector Process SNMP Spare dialog appears.
3. In the Available hosts field, select the host that you want to make the designated spare. This field contains the names of hosts in the Physical view that do not have SNMP collectors assigned to them.
4. In the Port field, specify the default port number, 3002, for the spare's collector process, then click Finish. Under the cluster's Collector Processes folder, the entry Collection Process SNMP Spare <n> appears, where n is a HAM-assigned sequential number, beginning with 1, that uniquely identifies this designated spare from others that may be defined in this cluster.
Note: Repeat Step 1 through Step 4 to add an additional designated spare to the cluster.

What to do next

If you are making changes to an existing configuration, make sure that the dataload.env file contains the right settings, as shown in the example after these steps:
1. Change to the directory where DataLoad is installed. For example:
cd /opt/dataload
2. Source the DataLoad environment:
. ./dataload.env
3. Make sure that the DL_HA_MODE field in the dataload.env file is set to DL_HA_MODE=true.
4. Source the DataLoad environment again:
. ./dataload.env
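For example, a minimal check of the HA setting, assuming the default /opt/dataload installation directory (the grep command is simply one convenient way to verify the value; any editor or viewer works equally well):

$ cd /opt/dataload
$ grep DL_HA_MODE dataload.env
DL_HA_MODE=true
$ . ./dataload.env

If grep shows DL_HA_MODE=false, or shows no match at all, edit dataload.env to set DL_HA_MODE=true before sourcing the file again.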

Add the managed definitions

A managed definition allows the HAM to bind a collector profile to a collector process.

About this task

Note: When you add a managed definition to a HAM cluster, the associated collector process is automatically added to the cluster's Collector Processes folder.

Procedure
1. In the Logical view, right-click the Managed Definitions folder that you created in Create the HAM and a HAM cluster on page 118.
2. Select Add Managed Definition from the pop-up menu. The Add Managed Definition - Choose Managed Definition dialog appears.
3. In the Collector number field, select the unique collector number to associate with this managed definition.
4. Click Finish. The following entries now appear for the cluster:
v Under the cluster's Managed Definitions folder, the entry Managed Definition <n> appears, where n is the collector number you selected in Step 3.
v Under the cluster's Collector Processes folder, the entry Collector Process [HostName] appears, where HostName is the host that will be bound to the SNMP collector you selected in Step 3. This host is the managed definition's primary host.
Note: Repeat Step 1 through Step 4 to add another managed definition to the cluster.

Example

When you finish adding managed definitions for a 3+1 HAM cluster, the Logical and Physical views might look like the following:

In this example, the hosts dcsol1a, dcsol1b, and docserver1 are the primaries, and docserver2 is the designated spare.

Define the resource pools

A resource pool is a list of the spares, in priority order, that the HAM can bind to a particular managed definition.

About this task

When you create a managed definition, the managed definition's primary host is the only host in its resource pool. To enable the HAM to bind a managed definition to other hosts, you must add more hosts to the managed definition's resource pool.

Procedure

To add hosts to a managed definition's resource pool, follow these steps:
1. Right-click a managed definition in the cluster's Managed Definitions folder.
2. Select Configure Managed Definition from the pop-up menu. The Configure Managed Definition - Collector Process Selection dialog appears, as shown below. In this example, the resource pool being configured is for Managed Definition 1 (that is, the managed definition associated with Collector 1).

3. In the Additional Collector Processes list, check the box next to each host to add to the managed definition's resource pool. Typically, you will add at least the designated spare (in this example, docserver2) to the resource pool. If you add a primary host to the resource pool, that host becomes a floating spare for the managed definition.
Note: You must add at least one of the hosts in the Additional Collector Processes list to the resource pool.
Since the goal in this example is to configure all primaries as floating spares, the designated spare and the two primaries (docserver1 and dcsol1a) will be added to the resource pool.
4. When finished checking the hosts to add to the resource pool, click Next.
Note: If you add just one host to the resource pool, the Next button is not enabled. Click Finish to complete the definition of this resource pool. Then return to Step 1 to define a resource pool for the next managed definition in the cluster, or skip to Save and start the HAM on page 123 if you are finished defining resource pools.
The Configure Managed Definition - Collector Process Order dialog appears, as shown below:

5. Specify the failover priority order for this managed definition. To do so:
a. Select a host to move up or down in the priority list, then click the Up or Down button until the host is positioned where you want.
b. Continue moving hosts until the priority list is ordered as you want.
c. Click Finish.
In this example, if the primary associated with Managed Definition 1 fails, the HAM will attempt to bind the managed definition to the floating spare dcsol1a. If dcsol1a is in use or otherwise unavailable, the HAM attempts to bind the managed definition to docserver1. The designated spare docserver2 is last in priority.
6. Return to Step 1 to define a resource pool for the next managed definition in the cluster, or continue with the next section if you are finished defining resource pools.

Save and start the HAM

When you finish configuring the HAM as described in the previous sections, you are ready to save the configuration and start the HAM.

Procedure
1. Click Topology > Save Topology to save the topology file containing the HAM configuration.
2. Run the deployer (see Starting the deployer on page 75), passing the updated topology file as input.
3. Open a terminal window on the DataChannel host.
4. Log in as pvuser.
5. Change your working directory to the DataChannel bin directory (/opt/datachannel/bin by default), as follows:
cd /opt/datachannel/bin
6. Bounce (stop and restart) the Channel Manager. For instructions, see Step 15.
7. Run the following command:
dccmd start ham
Monitoring of the HAM environment begins. A combined example of the command-line steps follows this procedure. For information on using dccmd, see the IBM Tivoli Netcool Performance Manager: Command Line Interface Guide.
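For example, the command-line portion of this procedure (Steps 3 through 7) might look like the following on the DataChannel host, assuming the default /opt/datachannel installation directory; the final status command, described earlier in Obtaining collector status, is an optional verification that the HAM is now monitoring the cluster:

$ cd /opt/datachannel/bin
$ dccmd start ham
$ dccmd status HAM.<hostname>.1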

Creating an additional HAM environment

How to create an additional HAM environment.

Typically, one HAM is sufficient to manage all the collectors you require in your HAM environment. But for performance reasons, very large Tivoli Netcool Performance Manager deployments involving dozens or hundreds of collector processes might benefit from more than one HAM environment.

HAM environments are completely separate from one another. A host in one HAM environment cannot fail over to a host in another HAM environment.

To create an additional HAM environment, perform all of the procedures described in Creating a HAM environment on page 117.

Modifying a HAM environment

How to modify a HAM environment.

You can modify a HAM environment by performing any of the procedures in Creating a HAM environment on page 117. For example, you can add collectors, add clusters, configure a primary host as a floating spare, change the failover priority order of a resource pool, and make a number of other changes to the environment, including moving collectors into or out of a HAM environment.

For information on moving a deployed SNMP collector into or out of a HAM environment, see Moving a deployed SNMP collector to or from a HAM environment on page 98.

You can also modify the configuration parameters of the HAM components that are writable. For information on modifying configuration parameters, see Changing configuration parameters of existing Tivoli Netcool Performance Manager components on page 96.

Removing HAM components

How to remove HAM components.

You can remove HAM components from the environment by right-clicking the component name and selecting Remove from the pop-up menu. The selected component and any subcomponents will be removed.

Before you can remove a designated spare (Collection Process SNMP Spare), you must remove the spare from any resource pools it may belong to. To remove a designated spare from a resource pool, open the managed definition that contains the resource pool, and clear the check box next to the name of the designated spare to remove. For information about managing resource pools, see Define the resource pools on page 121.

Stopping and restarting modified components

How to stop and restart modified components.

About this task

If you change the configuration of a HAM or any HAM components, or if you add or remove an existing collector to or from a HAM environment, you must bounce (stop and restart) the Tivoli Netcool Performance Manager components you changed. This is generally true for all Tivoli Netcool Performance Manager components that you change, not just HAM.

Procedure

To bounce a component, follow these steps:
1. Open a terminal window on the DataChannel host.
2. Log in as pvuser.
3. Change your working directory to the DataChannel bin directory (/opt/datachannel/bin by default), as follows:
cd /opt/datachannel/bin
4. Run the bounce command in the following format:
dccmd bounce <component>
For example:
v To bounce the HAM with the identifier HAM.dcsol1b.1, run:
dccmd bounce ham.dcsol1b.1
v To bounce all HAMs in the topology, run:
dccmd bounce ham.*.*
v To bounce the FTE for collector 1.1 that is managed by a HAM, run:
dccmd bounce fte.1.1
You do not need to bounce the HAM that the FTE and collector are in.
For information on using dccmd, see the IBM Tivoli Netcool Performance Manager: Command Line Interface Guide.
5. Bounce the Channel Manager. For instructions, see Step 15.

Viewing the current configuration

During the process of creating or modifying a HAM cluster, you may find it useful to check how the individual collector processes and managed definitions are currently configured.

Procedure

To view the current configuration of a collector process or managed definition, follow these steps:
1. Right-click the collector process or managed definition to view.
2. Select Show from the pop-up menu. The Show Collector Process... or Show Managed Definition... dialog appears. The following sections describe the contents of these dialogs.

Show Collector Process... dialog

Dialog box description.

The following figure shows a collector process configured with three managed definitions.

The configuration values are described as follows:
v dcsol1a. The primary host where this collector process runs.
v The port through which the collector process receives SNMP data.
v 3 2 (Primary) 1. The managed definitions that the HAM can bind to this collector process. The values have the following meanings:
- 3. The managed definition for Collector 3.
- 2 (Primary). The managed definition for Collector 2. This is the default managed definition for the collector process.
- 1. The managed definition for Collector 1.

Show Managed Definition... dialog

Dialog box description.

The Show Managed Definition... dialog contains the resource pool for a particular managed definition. This dialog contains the same information that appears in the Show Collector Process... dialog, but for multiple hosts instead of just one. As such, this dialog gives you a broader view of the cluster's configuration than a Show Collector Process... dialog.

The following figure shows a managed definition's resource pool configured with four hosts:

Note the following about this managed definition's resource pool:
v The priority order of the hosts is from top to bottom - therefore, the first collector process that the HAM will attempt to bind to this managed definition is the one on host dcsol1a. The collector process on host docserver2 is last in the priority list.
v The first three hosts are floating spares. They are flagged as such by each having a primary managed definition.
v The host docserver2 is the only designated spare in the resource pool. It is flagged as such by not having a primary managed definition.
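As a quick cross-check of what these dialogs show, you can also confirm the live bindings from the command line with the status command described earlier in Obtaining collector status (the HAM identifier here is illustrative; substitute your own):

$ dccmd status HAM.dcsol1b.1

The EXTENDED STATUS column reports which host is currently performing each load and which spares are available, which should correspond to the resource pools defined in the Topology Editor.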


More information

ExpressCluster X 3.2 for Linux

ExpressCluster X 3.2 for Linux ExpressCluster X 3.2 for Linux Installation and Configuration Guide 5/23/2014 2nd Edition Revision History Edition Revised Date Description 1st 2/19/2014 New manual 2nd 5/23/2014 Corresponds to the internal

More information

ExpressCluster X R3 WAN Edition for Windows

ExpressCluster X R3 WAN Edition for Windows ExpressCluster X R3 WAN Edition for Windows Installation and Configuration Guide v2.1.0na Copyright NEC Corporation 2014. All rights reserved. Copyright NEC Corporation of America 2011-2014. All rights

More information

IBM Tivoli Composite Application Manager for Applications Version 7.3. WebSphere MQ Monitoring Agent User's Guide IBM SC

IBM Tivoli Composite Application Manager for Applications Version 7.3. WebSphere MQ Monitoring Agent User's Guide IBM SC IBM Tivoli Composite Application Manager for Applications Version 7.3 WebSphere MQ Monitoring Agent User's Guide IBM SC14-7523-01 IBM Tivoli Composite Application Manager for Applications Version 7.3

More information

Oracle Fusion Middleware Planning an Installation of Oracle Fusion Middleware. 12c ( )

Oracle Fusion Middleware Planning an Installation of Oracle Fusion Middleware. 12c ( ) Oracle Fusion Middleware Planning an Installation of Oracle Fusion Middleware 12c (12.2.1.3) E80584-01 August 2017 Oracle Fusion Middleware Planning an Installation of Oracle Fusion Middleware, 12c (12.2.1.3)

More information

Administrator s Guide. StorageX 7.6

Administrator s Guide. StorageX 7.6 Administrator s Guide StorageX 7.6 May 2015 Copyright 2015 Data Dynamics, Inc. All Rights Reserved. The trademark Data Dynamics is the property of Data Dynamics, Inc. StorageX is a registered trademark

More information

Installation and User's Guide

Installation and User's Guide IBM Systems Director Storage Control Installation and User's Guide Version 4 Release 2 IBM Systems Director Storage Control Installation and User's Guide Version 4 Release 2 Note Before using this information

More information

Version 11 Release 0 May 31, IBM Contact Optimization Installation Guide IBM

Version 11 Release 0 May 31, IBM Contact Optimization Installation Guide IBM Version 11 Release 0 May 31, 2018 IBM Contact Optimization Installation Guide IBM Note Before using this information and the product it supports, read the information in Notices on page 39. This edition

More information

Cisco Unified Serviceability

Cisco Unified Serviceability Cisco Unified Serviceability Introduction, page 1 Installation, page 5 Introduction This document uses the following abbreviations to identify administration differences for these Cisco products: Unified

More information

Central Administration Console Installation and User's Guide

Central Administration Console Installation and User's Guide IBM Tivoli Storage Manager FastBack for Workstations Version 7.1.1 Central Administration Console Installation and User's Guide SC27-2808-04 IBM Tivoli Storage Manager FastBack for Workstations Version

More information

Central Administration Console Installation and User's Guide

Central Administration Console Installation and User's Guide IBM Tivoli Storage Manager FastBack for Workstations Version 7.1 Central Administration Console Installation and User's Guide SC27-2808-03 IBM Tivoli Storage Manager FastBack for Workstations Version

More information

Tivoli Data Warehouse

Tivoli Data Warehouse Tivoli Data Warehouse Version 1.3 Tivoli Data Warehouse Troubleshooting Guide SC09-7776-01 Tivoli Data Warehouse Version 1.3 Tivoli Data Warehouse Troubleshooting Guide SC09-7776-01 Note Before using

More information

Installing DevPartner Java Edition Release 4.1

Installing DevPartner Java Edition Release 4.1 Installing DevPartner Java Edition Release 4.1 Technical support is available from our Technical Support Hotline or via our FrontLine Support Web site. Technical Support Hotline: 1-888-686-3427 Frontline

More information

Netcool Configuration Manager Version 6 Release 4. API Guide R2E1

Netcool Configuration Manager Version 6 Release 4. API Guide R2E1 Netcool Configuration Manager Version 6 Release 4 API Guide R2E1 Netcool Configuration Manager Version 6 Release 4 API Guide R2E1 Note Before using this information and the product it supports, read the

More information

Network Manager IP Edition Version 3 Release 9. Network Troubleshooting Guide IBM R2E2

Network Manager IP Edition Version 3 Release 9. Network Troubleshooting Guide IBM R2E2 Network Manager IP Edition Version 3 Release 9 Network Troubleshooting Guide IBM R2E2 Network Manager IP Edition Version 3 Release 9 Network Troubleshooting Guide IBM R2E2 Note Before using this information

More information