IBM Tivoli OMEGAMON Platform: Historical Data Collection Guide for IBM Tivoli OMEGAMON XE Products


Tivoli OMEGAMON Platform (IBM Candle Management Server 360, CandleNet Portal 196)
Historical Data Collection Guide for IBM Tivoli OMEGAMON XE Products, GC


Note: Before using this information and the product it supports, read the information in "Notices".

Fourth Edition (June 2005)

This edition replaces GC

Copyright Sun Microsystems, Inc. Copyright International Business Machines Corporation 1996. All rights reserved. Note to U.S. Government Users Restricted Rights: Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Figures
Tables
Preface
    About This Guide
    Documentation Conventions
What's New
Chapter 1. Overview of Historical Data Collection
    About Historical Data Collection
    Historical Collection Options
    Performance Impact of Historical Data Requests
Chapter 2. Planning Collection of Historical Data
    Developing a Strategy for Historical Data Collection
Chapter 3. Configuring Historical Data Collection on CandleNet Portal
    Overview
    Configuring Historical Data Collection
    Starting and Stopping Historical Data Collection
Chapter 4. Configuring Historical Data Collection on CMW
    Invoking the HDC Configuration Program
    Using the Configuration Dialog to Control Historical Data Collection
    Defining Data Collection Rules
    Using the Advanced History Configuration Options Dialog
Chapter 5. Warehousing Your Historical Data
    Prerequisites to Warehousing Historical Data
    Configuring Your Warehouse
    Preventing Historical Data File Corruption
    Error Logging for Warehoused Data

Chapter 6. Converting History Files to Delimited Flat Files (Windows and OS/400)
    Conversion Process
    Archiving Procedure using LOGSPIN
    Archiving Procedure using the Windows AT Command
    Converting Files Using krarloff
    AS/400 Considerations
    Location of the Windows Executables and Historical Data Collection Table Files
Chapter 7. Converting History Files to Delimited Flat Files (z/OS)
    Automatic Conversion and Archiving Process
    Location of the z/OS Executables and Historical Data Table Files
    Manual Archiving Procedure
Chapter 8. Converting History Files to Delimited Flat Files (UNIX Systems)
    Understanding History Data Conversion
    Performing the History Data Conversion
Chapter 9. Converting History Files to Delimited Flat Files (HP NonStop Kernel Systems)
    Conversion Process
Appendix A. Maintaining the Persistent Data Store (CT/PDS)
    About the Persistent Data Store
    Components of the CT/PDS
    Overview of the Automatic Maintenance Process
    Making Archived Data Available
    Exporting and Restoring Persistent Data
    Data Record Format of Exported Data
    Extracting CT/PDS Data to Flat Files
    Command Interface
Appendix B. Support Information
Appendix C. Notices
Index

Figures

Figure 1. CandleNet Portal History Collection Configuration: Configuration Tab
Figure 2. CandleNet Portal History Collection Configuration: Status Tab
Figure 3. The Configure History Icon in the Administration Window
Figure 4. CMW History Configuration Dialog
Figure 5. CMS Selection Portion of Dialog
Figure 6. Table or Group Selection Portion of Dialog
Figure 7. Advanced History Configuration Options Dialog


Tables

Table 1. Symbols in Command Syntax
Table 2. Logfile parameter values
Table 3. krarloff Parameters
Table 4. DD Names Required
Table 5. KPDXTRA parameters
Table 6. History conversion parameters
Table 7. Determining the medium for dataset backup
Table 8. Section 1 Data Record Format
Table 9. Section 2 Data Record Format
Table 10. Section 2 Table Description Record
Table 11. Section 2 Column Description Record
Table 12. Section 3 Record Format


Preface

This document describes the use of the historical data collection capability in CandleNet Portal, the user interface for OMEGAMON XE products. It also describes the use of the historical data collection capability in the Candle Management Workstation.

Before you can use any of the procedures or tools documented in this book, the required version of the OMEGAMON Platform must have been installed, including the following components:
- Candle Management Server (CMS)
- CandleNet Portal client (desktop or browser)
- CandleNet Portal Server
- CMS and CandleNet Portal support for any IBM Tivoli OMEGAMON XE monitoring products

For instructions, see the installation and configuration books on the OMEGAMON Platform and CandleNet Portal Documentation CD and the IBM Tivoli OMEGAMON XE product documentation CDs.

If you intend to warehouse historical data, you must also have installed the Microsoft SQL Server relational database and the Candle Warehouse Proxy agent on Windows.

About This Guide

Who should read this guide

This guide is intended for those responsible for planning or configuring historical data collection for resources monitored by OMEGAMON XE products, and for those responsible for maintaining the collected data. The historical data collection configuration, warehousing, and archiving tasks require a working knowledge of:
- Windows, and MVS, OS/390, or z/OS operating systems
- the Microsoft SQL Server relational database

Document set information

This book is part of the OMEGAMON XE Platform library. This section lists the other publications in the library and related documents. It also describes how to access Tivoli publications online and how to order Tivoli publications.

OMEGAMON XE Platform library

The following documents are available in the OMEGAMON XE Platform library:
- Administering OMEGAMON Products: CandleNet Portal, GC. Describes the support tasks and functions required for the OMEGAMON Platform, including CandleNet Portal user administration.
- Using OMEGAMON Products: CandleNet Portal, GC. Describes the features of CandleNet Portal and how best to use them with your OMEGAMON products.
- Installing and Setting up OMEGAMON Platform and CandleNet Portal on Windows and UNIX, SC. Provides instructions for installing and configuring the components of the OMEGAMON Platform and the CandleNet Portal interface.
- Configuring Candle Management Server (CMS) on z/OS, GC. Provides instructions for configuring and customizing the Candle Management Server on z/OS.

The following books document the Candle Management Workstation interface to the OMEGAMON products:
- Candle Management Workstation Administrator's Guide, GC
- Candle Management Workstation Quick Reference, GC
- Candle Management Workstation User's Guide, GC

The following books document the messages issued by the OMEGAMON Platform components and the products that run on it:
- IBM Tivoli Candle Products Messages Volume 1 (AOP-ETX), SC
- IBM Tivoli Candle Products Messages Volume 2 (EU-KLVGM), SC
- IBM Tivoli Candle Products Messages Volume 3 (KLVHS-KONCT), SC
- IBM Tivoli Candle Products Messages Volume 4 (KONCV-OC), SC
- IBM Tivoli Candle Products Messages Volume 5 (ODC-VEB and Appendixes), SC

The online glossary for CandleNet Portal includes definitions for many of the technical terms related to OMEGAMON XE software.

Accessing publications online

The OMEGAMON Platform and CandleNet Portal Documentation CD contains the publications that are in the product library, in PDF format. Refer to the readme file on the CD for instructions on how to access the documentation.

IBM posts publications for this and all other Tivoli products, as they become available and whenever they are updated, to the Tivoli software information center Web site. Access the Tivoli software information center by first going to the Tivoli software library at the following Web address:

Scroll down and click the Product manuals link. In the Tivoli Technical Product Documents Alphabetical Listing window, click the OMEGAMON XE Platform link to access the product library at the Tivoli software information center.

If you print PDF documents on other than letter-sized paper, set the option in the File > Print window that allows Adobe Reader to print letter-sized pages on your local paper.

Ordering publications

You can order many Tivoli publications online at the following Web site:

You can also order by telephone by calling one of these numbers:
- In the United States:
- In Canada:
- In other countries, see the following Web site for a list of telephone numbers:

Tivoli technical training

For Tivoli technical training information, refer to the IBM Tivoli Education Web site.

Support information

If you have a problem with your IBM software, you want to resolve it quickly. IBM provides the following ways for you to obtain the support you need:
- Searching knowledge bases: You can search across a large collection of known problems and workarounds, Technotes, and other information.
- Obtaining fixes: You can locate the latest fixes that are already available for your product.
- Contacting IBM Software Support: If you still cannot solve your problem and you need to work with someone from IBM, you can use a variety of ways to contact IBM Software Support.

For more information about these three ways of resolving problems, see Support Information on page 99.

Participating in newsgroups

User groups provide software professionals with a forum for communicating ideas, technical expertise, and experiences related to the product. They are located on the Internet and are available using standard news reader programs. These groups are primarily intended for user-to-user communication and are not a replacement for formal support. To access a newsgroup, use the instructions appropriate for your browser.

Documentation Conventions

Overview

This guide uses several conventions for special terms and actions, and for operating system-dependent commands and paths.

Panels and figures

The panels and figures in this document are representations. Actual product panels may differ.

Required blanks

The slashed-b ( ) character in examples represents a required blank. The following example illustrates the location of two required blanks.

    eba*servicemonitor

Revision bars

Revision bars ( ) may appear in the left margin to identify new or updated material.

Variables and literals

In examples of z/OS command syntax, uppercase letters are actual values (literals) that the user should type; lowercase letters are used for variables that represent data supplied by the user. Default values are underscored.

    LOGON APPLID (cccccccc)

In the above example, you type LOGON APPLID followed by an application identifier (represented by cccccccc) within parentheses.

Symbols

The following symbols may appear in command syntax:

Table 1. Symbols in Command Syntax

Symbol: |
Usage: The | symbol is used to denote a choice. Either the argument on the left or the argument on the right may be used. Example:
    YES | NO
In this example, YES or NO may be specified.

Symbol: [ ]
Usage: Denotes optional arguments. Arguments not enclosed in square brackets are required. Example:
    APPLDEST DEST [ALTDEST]
In this example, DEST is a required argument and ALTDEST is optional.

Table 1. Symbols in Command Syntax (continued)

Symbol: { }
Usage: Some documents use braces to denote required arguments, or to group arguments for clarity. Example:
    COMPARE {workload} - REPORT={SUMMARY|HISTOGRAM}
The workload variable is required. The REPORT keyword must be specified with a value of SUMMARY or HISTOGRAM.

Symbol: _
Usage: Default values are underscored. Example:
    COPY infile outfile - [COMPRESS={YES|NO}]
In this example, the COMPRESS keyword is optional. If specified, the only valid values are YES or NO. If omitted, the default is YES.

What's New

Disk space requirements have moved

Information about space requirements for the historical tables for OMEGAMON XE products, formerly contained in an appendix to this book, has been moved to the user's guide or getting started guide for the appropriate products.


Chapter 1. Overview of Historical Data Collection

Introduction

This chapter introduces historical data collection.

Chapter Contents
- About Historical Data Collection
- Historical Collection Options
- Performance Impact of Historical Data Requests

About Historical Data Collection

Overview

The Historical Data Collection (HDC) Configuration program, invoked from either CandleNet Portal or the Candle Management Workstation (CMW), begins the collection of historical data. The program allows you to specify the collection of historical data either at the Candle Management Server (CMS) or at the remote system where the OMEGAMON XE monitoring agent is installed.

For Candle Management Servers, you can optionally specify historical data to be warehoused. Candle monitoring agents can also warehouse data as long as they are connected to a CMS. The warehoused data is written to the Microsoft SQL Server relational database on Windows. See Warehousing Your Historical Data on page 47.

Alternatively, you can continue to convert your historical data to delimited flat files or datasets using programs distributed with CandleNet Portal and with the CMW. You can then use the converted historical data with any reporting tool from a third-party vendor such as SAS or Microsoft Excel, or with other popular PC application tools, to produce trend analysis reports and graphics. You can also load the converted data into relational databases such as DB2, Oracle, Sybase, Microsoft SQL Server, or others and produce customized history reports.

Managing your historical data

It is vital that you either warehouse your historical data or convert it to delimited flat files or datasets. Otherwise, your history data files will grow unchecked, using up valuable disk space. On the mainframe, datasets will fill and historical data will no longer be written. If you choose not to warehouse your data, you must institute rolloff jobs to regularly convert and empty the history data files. This task is in addition to the main function of the rolloff programs, which is to convert the binary history data into readable text files. See the Converting History Files to Delimited Flat Files chapters, as appropriate for your platform, for instructions.

Collecting Short Term History

In addition to the historical data collection reports, for which collection and conversion procedures are documented in this manual, CandleNet Portal and the CMW provide a short term history reporting capability. You can find information on how to request short term history reports, and how to specify the time interval for which you want short term history displayed, in the individual product manuals in the discussion of product reports. There is information about, and illustrations of, the available short term status history reports in the Candle Management Workstation User's Guide.

You can also find information on requesting history reports and on specifying time intervals in CandleNet Portal in the online help.

To collect the data required for the generation of short term history reporting, you must start historical data collection as documented in Configuring Historical Data Collection on CMW on page 37 or in Configuring Historical Data Collection on CandleNet Portal on page 29.

Historical Collection Options

Overview

To provide flexibility in using historical data collection, you can:
- turn history collection on, or turn off all history collection, for multiple selected Candle Management Servers and multiple selected tables for a product
- save the history file at the CMS or at the remote agent
- define what data to save; that is, select which columns of a history table should be collected
- define the periodic time interval at which to save data (5, 15, 30, or 60 minutes)
- define the number of intervals of history to retain before the data is warehoused to a relational database using ODBC, or use product-provided scripts to convert historical data to delimited flat files. These options are mutually exclusive.

Historical data collection can be specified for individual Candle Management Servers, products, and tables. However, all agents of the same type that report directly to the same CMS must have the same history collection options. Also, for a given history table, the same history collection options are applied to all Candle Management Servers for which that history table's collection is currently enabled. For example, if collection of UNIX Disk Performance (UNIXDPERF) is specified at the remote agent level, each UNIX agent running on a remote managed system collects historical data on that remote managed system.

For Candle Management Servers, you can optionally specify historical data to be warehoused. Candle monitoring agents can also warehouse data as long as they are connected to a CMS. The warehoused data is written to the Microsoft SQL Server database on Windows.

Note: This document describes using Version 360 of the Warehouse Proxy Agent to warehouse your historical data.

Some Candle agents do not provide history data for all of their tables and attribute groups. This is because the applications group for that agent has determined that collecting history data for certain tables is not appropriate, or would have a detrimental effect on performance, for example because of the vast amount of data that would be generated. Therefore, for each product, only tables that are available for history collection are listed in the History Collection Configuration dialog. If, after you configure history data for a table and start history collection, you still do not see history data for that table, there is a problem either with the agent collection of that data or with the history mechanism.

Performance Impact of Historical Data Requests

Overview

The impact of historical data collection and warehousing on OMEGAMON Platform components depends on multiple factors, including the collection interval, the number and size of historical tables collected, the amount of data, system size, and so on. This section describes some of these factors.

Impact on the CMS or the agent of large amounts of historical data

The component specified for collecting or warehousing history data (either the CMS or the agent) can be negatively impacted when processing large amounts of data. This can occur because the historical warehouse process on the CMS or the agent must read the large row set from the history data file. The data must then be transmitted to the Warehouse Proxy agent. For large datasets, this sometimes impacts memory and CPU resources. Because of its ability to handle numerous requests simultaneously, the impact on the CMS is not as great as the impact on the agent.

Impact on the agent

An agent processing a large data request may be prevented from processing other requests until the time-consuming request has completed. This is important because most agents can usually process only one report, one situation, or one warehousing request at a time.

Requests for historical data from large tables

Requests for historical data from tables that collect a large amount of data have a negative impact on the performance of the OMEGAMON Platform components involved. To reduce the performance impact on your system, we recommend setting a longer collection interval for tables that collect a large amount of data. You specify this setting from the Configuration tab of the History Collection Configuration dialog. To find out the disk space requirements for tables in your OMEGAMON XE product, see Disk Space Requirements for Historical Data Tables on page 141.

When you are viewing a report or a workspace for which you would like (short term) historical data, you can set the Time Span interval to obtain data for previous samplings. Selecting a long time span interval increases the amount of data being processed and may have a negative impact on performance, because the program must dedicate more memory and CPU cycles to process a large volume of report data. In this instance, we recommend specifying a shorter time span setting, especially for tables that collect a large amount of data.

If a report rowset is too large, the report request may drop the task and return to CandleNet Portal or the CMW with no rows because the agent took too long to process the request. However, the agent continues to process the report data to completion, and remains blocked, even though the report data is not viewable.

There could also be cases where the historical report data from the Persistent Data Store is not available. This can occur because the Persistent Data Store may not be available while its maintenance job is running.

Scheduling the warehousing of historical data

The same issues that apply to requesting large reports apply to scheduling the warehousing of historical data only once a day. The more data that is warehoused at once, the more resources are required to read the data into memory and to transmit it to the Warehouse Proxy agent. If possible, we recommend making the warehousing rowset smaller by spreading the warehousing load over each hour, that is, by setting the warehouse interval to 1 hour.

Chapter 2. Planning Collection of Historical Data

Introduction

This chapter provides information about:
- selecting a strategy for historical data collection in your enterprise
- the components used by various platforms to accomplish historical data collection
- the tables used to collect historical data and their space requirements

Chapter Contents
- Developing a Strategy for Historical Data Collection

Developing a Strategy for Historical Data Collection

Overview

When developing a strategy for historical data collection, you must determine:
- the rules under which data will be collected; for example: How often do I want to collect historical data? Where do I want to collect the data, at the Candle Management Server or at the location where the OMEGAMON XE monitoring agent is running? What data do I want to collect?
- how often you want to warehouse collected data
- whether scheduling of data conversion to delimited flat files should be automatic or manual

Defining data collection rules

Among the factors that should govern the frequency of historical data collection are:
- How much disk storage will be required to store the data being collected?
- What use will be made of the collected data?

For information about using the History Configuration dialog to establish the rules under which data is collected, see Defining Data Collection Rules on page 41.

Warehousing collected data

The History Configuration program used by the Candle Management Workstation permits you to warehouse collected historical data to a database using ODBC. For additional information, see Specifying collection options on page 42. CandleNet Portal also allows you to warehouse collected historical data to a database using ODBC; see Configuring collection of attribute data on page 32. For instructions on configuring a database, see Warehousing Your Historical Data on page 47.

Note: This document describes using Version 360 of the Candle Warehouse Proxy Agent to warehouse your historical data.

Defining the data conversion process

Data can be scheduled for conversion to delimited flat files either manually or automatically. If you choose to continue to convert data to delimited flat files, we strongly recommend that you schedule data conversion to be automatic. You will want to perform data conversion on a regular basis even if you are collecting historical data only to support the short term history that is displayed on product reports, because any historical data collection consumes system resources.

Data conversion programs

Programs are called to execute the conversion of history files to delimited flat files. The program that performs the conversion differs depending on your system environment:
- UNIX: The program to convert the binary history file to a delimited flat file is called krarloff.
- Windows 2000, ME, or XP: The program to convert the binary history file to a delimited flat file is called krarloff. The program used to simulate the UNIX crontab command to archive historical data collection files on Windows Candle Management Servers and remote managed systems is called LOGSPIN.EXE.
- MVS, OS/390, or z/OS: The program to convert the binary history file to a delimited flat file is called KPDXTRA.

Columns added to history data files and to meta description files

Four columns are automatically added to the history data files and to the meta description files:
- TMZDIFF. The time zone difference from Universal Time (GMT). This value is shown in seconds.
- WRITETIME. The CT timestamp when the record was written. This is a 16-character value in the format cyymmddhhmmssttt, where: c = century, yymmdd = year, month, day, and hhmmssttt = hours, minutes, seconds, milliseconds.
- SAMPLES. Incremental counter for the number of samples written since the agent started. All rows written during the same interval have the same number.
- INTERVAL. The time between samples, shown in milliseconds.

Note: The warehousing process adds only two columns (TMZDIFF and WRITETIME) to the warehouse database. See Warehousing Your Historical Data on page 47.

For a sample meta description file, see Sample *.hdr meta description file on page 28.

Meta description files

A meta description file describes the format of the data in the source files. Meta description files are generated at the start of the historical data collection process. The various platforms use different file naming conventions. Here are the rules for some platforms:

- AS/400 and HP NonStop Kernel (formerly Tandem): Description files use the name of the data file as the base; the last character of the name is M. For example, for table QMLHB, the history data file name is QMLHB and the description file name is QMLHBM.
- z/OS (and earlier): Description records are stored in the PDS facility, along with the data.
- UNIX: Uses the *.hdr file naming convention.
- Windows: Uses the *.hdr file naming convention.

Sample *.hdr meta description file

    TMZDIFF(int,0,4)WRITETIME(char,4,16)QM_APAL.ORIGINNODE(char,20,128)
    QM_APAL.QMNAME(char,148,48)QM_APAL.APPLID(char,196,12)
    QM_APAL.APPLTYPE(int,208,4)QM_APAL.SDATE_TIME(char,212,16)
    QM_APAL.HOST_NAME(char,228,48)QM_APAL.CNTTRANPGM(int,276,4)
    QM_APAL.MSGSPUT(int,280,4)QM_APAL.MSGSREAD(int,284,4)
    QM_APAL.MSGSBROWSD(int,288,4)QM_APAL.INSIZEAVG(int,292,4)
    QM_APAL.OUTSIZEAVG(int,296,4)QM_APAL.AVGMQTIME(int,300,4)
    QM_APAL.AVGAPPTIME(int,304,4)QM_APAL.COUNTOFQS(int,308,4)
    QM_APAL.AVGMQGTIME(int,312,4)QM_APAL.AVGMQPTIME(int,316,4)
    QM_APAL.DEFSTATE(int,320,4)QM_APAL.INT_TIME(int,324,4)
    QM_APAL.INT_TIMEC(char,328,8)QM_APAL.CNTTASKID(int,336,4)
    SAMPLES(int,340,4)INTERVAL(int,344,4)

For example, an entry may have the form:

    attribute_name(int,75,20)

where int identifies the data as an integer, 75 is the starting column in the data file, and 20 is the length of the field for this attribute in the file.

Estimating Space Required to Hold Historical Data Tables

Historical data is written to performance attribute tables. Refer to the product documentation for assistance in determining the names of the tables in which historical data is stored and their size, as well as which tables are defaults. Most products provide worksheets to assist you in estimating the size of the disk storage required to hold your enterprise's historical data.
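Because each *.hdr entry records an attribute name together with its type, starting column, and length, the record layout of a history file can be recovered programmatically. The following Python sketch is illustrative only and is not part of the product; the file name ASCPUUTIL.hdr is just an example, and the WRITETIME decoding follows the cyymmddhhmmssttt format described above.

```python
import re

# Matches entries of the form NAME(type,offset,length), for example
# QM_APAL.APPLID(char,196,12) from the sample *.hdr file above.
ENTRY = re.compile(r"([\w.*]+)\((\w+),(\d+),(\d+)\)")

def parse_hdr(path):
    """Return (name, type, offset, length) tuples from a meta description file."""
    with open(path) as f:
        text = f.read()
    return [(name, typ, int(off), int(length))
            for name, typ, off, length in ENTRY.findall(text)]

def decode_writetime(ts):
    """Decode a cyymmddhhmmssttt CT timestamp (c = century; 1 = 21st century)."""
    year = 1900 + 100 * int(ts[0]) + int(ts[1:3])
    return (f"{year}-{ts[3:5]}-{ts[5:7]} "
            f"{ts[7:9]}:{ts[9:11]}:{ts[11:13]}.{ts[13:16]}")

# Example usage with a hypothetical history table file name:
for name, typ, offset, length in parse_hdr("ASCPUUTIL.hdr"):
    print(f"{name:40} {typ:5} offset {offset:4}, length {length}")

print(decode_writetime("1050615143000123"))  # prints: 2005-06-15 14:30:00.123
```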

Chapter 3. Configuring Historical Data Collection on CandleNet Portal

Introduction

This chapter describes how to configure and manage the collection of historical data from CandleNet Portal. See Configuring Historical Data Collection on CMW on page 37 for instructions for configuring and managing historical data collection from the Candle Management Workstation.

Before you begin

CMS start-up must be complete and the CMS must be running before you attempt to configure historical data collection.

If you choose to warehouse your historical data rather than convert it to delimited flat files, you must have installed and configured the relational database to which you will roll off the data via ODBC. Refer to Installing and Setting up OMEGAMON Platform and CandleNet Portal on Windows and UNIX for details on installing the database to which you will write historical data. See Configuring Your Warehouse on page 49 for configuration information.

Chapter Contents
- Overview
- Configuring Historical Data Collection
- Starting and Stopping Historical Data Collection

Overview

Historical setup

Configuring historical data collection involves specifying the attribute groups for which data is collected, the collection interval, the roll-off interval (to a data warehouse), if any, and where the collected data is stored (at the agent or the CMS).

To ensure that data samplings are saved to populate your predefined historical workspaces, you must first configure and start historical data collection. This requirement does not apply to workspaces using attribute groups that are historical in nature and show all their entries without your starting data collection separately.

Some agents do not provide history data for all of their attribute group tables. This is because the application development group for that agent has determined that collecting history data for certain tables is not appropriate or would have a detrimental effect on performance, for example because of the vast amount of data that would be generated. Therefore, for each product, only tables that are available for history collection are listed in the History Collection Configuration dialog. See Configuring Historical Data Collection on page 32.

Requirements for invoking the HDC configuration program

In order to invoke the HDC Configuration program, you must have Configure History authority. The system administrator can grant this authority using the Permissions tab of Administer Users in CandleNet Portal. If you do not have the proper authority, you will not see the menu option or the toolbar option for historical configuration. See the Using OMEGAMON Products: CandleNet Portal document for more information.

Data roll off

On Windows and UNIX systems, historical data is collected in binary files. These files grow as new data gets added at every sampling interval. Their size can increase quickly and take up a great deal of space on the hard drive, and the larger a history file is, the longer it takes to retrieve historical data into views. On z/OS systems, historical data is stored in data sets. If these datasets fill up and no empty datasets are available, future attempts to write data to any dataset in the group will fail. On z/OS, you can configure the persistent data store (CT/PDS) to maintain the historical data sets.

In addition, the OMEGAMON Platform has file conversion programs that move data out of the historical files or datasets to delimited text files and delete the stored information. See the chapter on converting files to delimited flat files appropriate for your platform for instructions.

The long-term history feature offers a more permanent solution. The history files are maintained automatically because the data is periodically rolled off to a historical database (also called the Candle Data Warehouse). To use long-term history, you must have configured your environment to include the Warehouse Proxy agent and the Candle Data Warehouse (historical database) for storing long-term historical data.

See Installing and Setting up OMEGAMON Platform and CandleNet Portal on Windows and UNIX and Warehousing Your Historical Data on page 47 for instructions.

Viewing historical data

The table view and the bar, pie, and plot charts in CandleNet Portal have a tool for setting a time span. This Time Span tool causes previously collected data samples to be reported up to the time specified. Your product may also have predefined workspaces with historical views. If, after you configure history data for a table and start history collection, you still do not see history data for that table, there is a problem either with the agent collection of that data or with the history mechanism.

Configuring Historical Data Collection

Overview

You use the History Collection Configuration dialog to:
- review the current configuration of historical data collection for a specific CMS or product
- start or stop historical data collection
- specify how historical data is to be collected for a specific product on a specific CMS
- change existing specifications for data collection

Accessing the History Collection Configuration dialog

You access the History Collection Configuration dialog by clicking the icon on the toolbar or by selecting History Configuration from the Edit menu (Ctrl+H). If you do not see the icon or the menu option, your user ID does not have the proper authority.

Configuring collection of attribute data

The groups for which you want to collect data must be configured before you can start data collection. You use the Configuration tab to set up historical data collection (see Figure 1). From the Configuration tab, you can specify:
- the product for which data is to be collected
- the attribute group or groups for which data is to be collected
- the interval at which data for a particular attribute group is collected
- the location at which the data is stored (either the agent or the CMS)
- the interval at which data is warehoused, if any

If short term history data is not being warehoused, it accumulates indefinitely unless it is rolled off using the provided file conversion programs. If it is being warehoused, data older than 24 hours is automatically deleted.

Figure 1. CandleNet Portal History Collection Configuration: Configuration Tab

You can view the attribute groups for a selected product for which data collection is recommended by clicking Show Default Groups.

Note: You cannot configure data collection for individual attributes from CandleNet Portal. If you want to exclude or include specific attributes in a group, you must configure collection from the CMW. See Configuring Historical Data Collection on CMW on page 37.

Configuration tab

To configure data collection for an attribute group or groups:

1. On the Configuration tab, select the product (agent type) for which you want to collect data. The attribute groups for which you can collect historical data appear in a list box.

Note: When you select a product type, you are configuring collection for all agents of that type that report to the selected CMS.

2. Select one or more attribute groups, then use the radio buttons to select the interval for data collection, the location of data collection, and the interval for warehousing, if any.

Note: The controls show the default settings when you first open the dialog. As you select attribute groups from the list, the controls do not change for the selected group. If you change the settings for a group, those changes continue to display no matter which group you select while the dialog is open. This enables you to adjust the configuration controls once and apply the same settings to any number of attribute groups (one after the other; or use Ctrl+click to select multiples, or Shift+click to select all groups from the first one selected to this point). The true configuration settings show in the group list and on the Status tab.

3. Click Configure Group(s) to apply the configuration selections to the attribute group or groups. The values do not take effect unless you click this button. Changes made to the configuration of any group are automatically reflected on the Status tab for all Candle Management Servers on which collection for the changed groups is already started. It is not necessary to stop and then restart collection for a group whose configuration has changed.

Note: Clicking Unconfigure Group(s) automatically stops collection for that group on all Candle Management Servers first.

Configuring data collection for logs

The CCC Logs apply to all applications. If you want to save the information in these logs, you should configure them for warehousing. You can configure historical data collection for any of the CCC Logs.

Note: Although you can set up historical data collection for any of these logs, you can create a chart or table view only for TNODESTS (Managed System Change Log) and the Situations Status Log. CandleNet Portal currently does not provide query support for KRAMESG (Universal Message Log), OPLOG (Operations Log), TEIBLOG (Enterprise Information Base Changes Log), or TWORKLST (Worklist Log).

Starting and Stopping Historical Data Collection

Overview

You start and stop historical data collection for a specific CMS from the Status tab of the History Collection Configuration dialog. The attribute groups for which you want to collect data must be configured before you can start data collection. See Configuring collection of attribute data on page 32.

Starting historical data collection

Use the Status tab of the History Collection Configuration dialog to view the configuration and collection status for each attribute group of a selected product on a selected CMS (see Figure 2 on page 36). You also use the Status tab to start and to stop collection.

To start data collection for configured attribute groups:

1. On the Status tab, select a CMS from the dropdown list.
2. Select a product.
3. Select the attribute group or groups for which you want to start data collection. The attribute groups for which historical data collection has been configured are listed in the Collection Status table. Shift-click to select contiguous groups, or Ctrl-click to select noncontiguous groups.
4. Click Start Collection.

On distributed systems, two files are created for every attribute group selected: a configuration file with a .hdr extension and a binary history file with no extension. For example, if you select the Address Space CPU Utilization attribute group, the two history files are ASCPUUTIL.hdr and ASCPUUTIL.

Figure 2. CandleNet Portal History Collection Configuration: Status Tab

Stopping data collection

To stop data collection:

1. On the Status tab, select a CMS from the dropdown list.
2. Select a product.
3. Select the attribute group or groups for which you want to stop data collection. Shift-click to select contiguous groups, or Ctrl-click to select noncontiguous groups.
4. Click Stop Collection.

Chapter 4. Configuring Historical Data Collection on CMW

Introduction

You invoke the Historical Data Collection (HDC) Configuration program to start or to stop the collection of historical data. You define the rules for running the program using the History Configuration dialog, illustrated in this chapter. For information on configuring historical data collection on CandleNet Portal, see Configuring Historical Data Collection on CandleNet Portal on page 29.

Before you begin

CMS start-up must be complete and the CMS must be running before you attempt to configure historical data collection.

If you choose to warehouse your historical data rather than convert it to delimited flat files, you must have installed and configured the relational database to which you will roll off the data via ODBC. Refer to Installing and Setting up OMEGAMON Platform and CandleNet Portal on Windows and UNIX for details on installing the database to which you will write historical data. See Configuring Your Warehouse on page 49 for configuration information.

Chapter Contents
- Invoking the HDC Configuration Program
- Using the Configuration Dialog to Control Historical Data Collection
- Defining Data Collection Rules
- Using the Advanced History Configuration Options Dialog

Invoking the HDC Configuration Program

Requirements for invoking the HDC Configuration program

In order to invoke the HDC Configuration program, you must have the appropriate authority to launch the program. The system administrator can grant this authority using the Authority Settings window. If you do not have appropriate authority to launch the Configure History program, the associated icon will not appear in the Administration - Icons window.

Steps to invoke the HDC Configuration Program

To invoke the HDC Configuration program:

1. Access the CMW Administration - Icons window (Figure 3).
2. From the CMW Administration - Icons window, double-click the Configure History icon. The CMW displays the CCC History Configuration dialog.

Figure 3. The Configure History Icon in the Administration Window

About the History Configuration dialog

Using the History Configuration dialog, you can:
- review current settings for historical data collection for a specific CMS or product
- start or stop historical data collection
- specify how historical data is to be collected for a specific product on a specific CMS or on multiple Candle Management Servers. You can now configure history for multiple servers and multiple tables at one time.
- change existing specifications for data collection

Figure 4. CMW History Configuration Dialog

Using the Configuration Dialog to Control Historical Data Collection

Specifying configuration options

On the CCC History Configuration dialog, you can select:
- Display current configuration, to display the collection status for each table for the currently selected product. If you have selected multiple Candle Management Servers, the Tables list box shows the collection status for the first selected CMS. A button labelled Next... will be visible; selecting it updates the Tables list box with the status for the next selected CMS. You can continue to select the Next... button until you have displayed the status for each selected server. Note: If you use this dialog to change your current configuration, the changes you make may not be immediately reflected in the Tables list box, since the request must be transmitted to and processed by each CMS. You may need to refresh the status of the Tables list box after a few seconds by selecting the Display Configuration button before your changes become evident.
- Start default collection, to begin historical data collection for those product tables defined as defaults. A confirmation message box pops up giving you the option of cancelling your request. If you select Cancel, the Tables list box is updated to show the tables that have been designated as defaults. Refer to Disk Space Requirements for Historical Data Tables on page 141 for information about the default historical tables for your installed Candle products.
- Stop all history collection, to stop all historical data collection for the selected product on all selected Candle Management Servers.
- Start collection, to begin collection for the tables that are currently selected. Note: Historical information will not be recorded unless you press Start collection.
- Advanced configuration, to display a dialog that permits you to specify the subset of a table's attributes that are to be collected. (By default, all of a table's attributes are collected.) You can also access the Advanced History Configuration Options dialog by double-clicking a table or tables displayed in the Select Table(s) box.
- Help, to receive information about panel options.
- Quit, to exit historical data collection. Selecting Quit stops the configuration program.

Defining Data Collection Rules

Overview

You can specify these historical data collection values:
- one or more Candle Management Servers that you wish to configure for a product selected from a pulldown menu. Servers must be online to be configured.
- the product for which historical data is to be collected
- the name of the group(s) or table(s) for which historical data is to be collected
- the collection interval
- the location where data is to be collected, either at the CMS or at the location where the agent is running
- how often data is to be rolled off to a warehouse

Selecting the target Candle Management Server(s)

On the CCC History Configuration dialog, the Select CMS target(s) field displays the identifier for the hub CMS and any Candle Management Servers attached to that hub. You can refresh the list of target Candle Management Servers by selecting the Rebuild CMS List pushbutton. Selecting Rebuild CMS List causes the displayed list of available Candle Management Servers to be refreshed with any CMS started or stopped since the list was last displayed.

Figure 5. CMS Selection Portion of Dialog

Selecting a product

The pulldown menu in the Select a Product field shows all of the Candle products installed in your environment. From this pulldown list, select the product or application for which you want historical data collected.

Selecting Group(s)

In the Select Table(s) field, you can control whether the list box displays the actual table name or the Group name (the default) for each table. By clicking the appropriate button, you can view the list by Group name or by Table name. Depending on your selection, a list is displayed that contains the available groups or tables for which historical data can be collected.

For each entry in the list, the following are displayed (Figure 6):
- Group Name or Table Name: Name of the group or table for which historical data will be collected
- Collection Interval: Collection interval currently specified for the named group or table, or OFF
- Collection Location: Collection location currently specified for the named group or table
- Warehouse Interval: The frequency at which historical data is rolled off to your Candle data warehouse
- Filename: Name of the binary file to which raw historical data is written at each collection interval

Figure 6. Table or Group Selection Portion of Dialog

Specifying collection options

Using the Table or Group selection portion of the dialog (Figure 6), you can specify the following collection options for historical data:
- Collection Interval: The interval at which historical data is collected. For example, specifying 5 causes historical data to be collected at the end of every 5-minute period. You can specify values of 5, 15, or 30 minutes, or 1 hour. Using this field, select Off to turn off collection for the selected CMS target(s) and associated product without affecting historical data collection on other Candle Management Servers or agents.
- Collect Data At: The location at which data is to be collected, either at the remote agent or at the CMS to which the agent is connected. Note: If you use the Advanced Configuration button to provide a custom definition, and if collection is started once a custom definition is in place, the history data will be collected at the CMS regardless of the setting of the Collection Location radio button.
- Warehouse every: The frequency at which historical data is rolled off to your Candle data warehouse. If you do not want to warehouse your historical data, select Off.
- Filename: Name of the binary file to which raw historical data is written at each collection interval.

Note: Historical information will not be recorded unless you press Start collection.

Note: Warehousing data to an ODBC database is mutually exclusive with running data conversion programs on your historical data. If you choose to continue to run your data conversion scripts, select Off for the Warehouse every option.

Runtime Information

The message field at the bottom of the CCC History Configuration dialog can display status information pertaining to the current or most recently completed request.

Using the Advanced History Configuration Options Dialog

Overview

If you select Advanced configuration from the CCC History Configuration dialog, or if you double-click a table or tables displayed in the Select Table(s) box, the Advanced History Configuration Options dialog displays. Use this dialog to select the attributes for which you want historical data to be collected.

Note: To avoid the corruption of historical data files, you must roll off and delete existing history data files and meta files prior to modifying the Advanced History Configuration options when storing history data at the CMS. See Preventing Historical Data File Corruption on page 50.

Figure 7. Advanced History Configuration Options Dialog

Use the Add and Remove buttons to add attributes to or remove attributes from the Selected and Available Attributes lists respectively. Add All and Remove All move the entire contents of one list to the other. You can also double-click an attribute in one list to move it to the other. To obtain a list of the attributes currently being collected, click Current settings. Reset deletes any customized attribute subset you may have created, so that the next time collection is started for the table, the default (that is, all attributes) is selected.

When the Selected Attributes list is complete, select OK. This creates a local, custom configuration definition for the selected table that exists until the history configuration application terminates or you select the Reset button. This custom definition takes effect when historical data collection is next started for that table. Every product, other than the CCC Logs, requires that you specify at least the System_Name attribute as well as one other column.

Special considerations for CCC Logs

The CCC Logs, a group of enterprise information base (EIB) tables for which history is available, require that you specify the Global_Timestamp attribute and at least one other column. The collection interval and location, as well as the Warehouse interval, are fixed for the Status_History, EIB_Changes, Policy_Status, and System_Status logs, as follows:
- Collection Interval: once a day
- Collection Location: at the CMS
- Warehouse Interval: once per day

See the Candle Management Workstation User's Guide for additional information on the CCC Logs. See also the Candle Management Workstation Administrator's Guide for a detailed description of the Display Item attribute. This attribute is used to more easily differentiate situations. You can view the results in the Status History log.

Universal Agent history configuration

Generally, each product is shipped with a file that is installed into the CMW's SQLLIB directory. This file contains all of the definitions required by the Historical Data Collection Configuration program to start and stop historical data collection. Because the tables and attributes collected by Universal Agents are defined by you, the history definition file is not available to the CMW. For Universal Agents, history definitions are created dynamically from the agent's attribute file. This file is retrieved from the agent by the CMW when the agent comes online. There are no default tables for Universal Agents. If a new Universal Agent comes online after the Historical Data Collection Configuration application has started, you will need to restart this application before history collection can be configured for the new agent.


Chapter 5. Warehousing Your Historical Data

Introduction

Several steps are required in order to warehouse your historical data to a supported relational database using ODBC. Other considerations must also be addressed. This chapter provides guidance on warehousing historical data.

Note: This document describes using Version 360 of the Candle Warehouse Proxy Agent to warehouse your historical data.

Before you begin

Refer to Installing and Setting up OMEGAMON Platform and CandleNet Portal on Windows and UNIX for details on installing the database to which you will write historical data. That database must be installed before you can begin rolling off historical data to it. Also, review Configuring Historical Data Collection on CMW on page 37 or Configuring Historical Data Collection on CandleNet Portal on page 29 for information about using the Historical Data Collection program on the appropriate user interface and using the history configuration dialogs.

Chapter Contents
- Prerequisites to Warehousing Historical Data
- Configuring Your Warehouse
- Preventing Historical Data File Corruption
- Error Logging for Warehoused Data

Prerequisites to Warehousing Historical Data

Overview

In order to use ODBC to warehouse historical data, your enterprise must first:

1. Install Microsoft SQL Server.
2. Define a user ID and password. Important: In SQL Server, the user ID must be a member of the db_owner Fixed Database Role, located in the Database/Roles menu. When the user ID exists in db_owner, all of the Warehouse Proxy objects in the database have the same user ID as the owner ID, and the new tables and columns are correctly inserted into the database.
3. Use the Windows ODBC Administrator to add and configure a data source called Candle Data Warehouse. The data source must be called Candle Data Warehouse; no other name is acceptable. Configure the data source to point to the SQL Server that is to be used for warehousing historical data.
4. Start the Warehouse Proxy Agent on a Windows system in the network. Configure the Candle Data Warehouse ODBC data source on the same system.

You are now ready to use data warehousing.

Note: For mainframe products, in addition to configuring ODBC and SQL Server, you must set up historical data collection by defining Persistent Data Store (CT/PDS) datasets. You must also set up the required maintenance tasks to ensure the availability of these datasets. See Maintaining the Persistent Data Store (CT/PDS) on page 75.

Historical data collection can be configured to store data at any combination of the CMS or the agents. To ensure that history data is received from all sources, you must configure a common shared network protocol between the Warehouse Proxy agent and the component that is sending history data to it (either a CMS or an agent). For example, you might have a CMS configured to use both IP and IP.PIPE, one agent configured with IP, and a second agent with IP.PIPE. In this example, the Warehouse Proxy agent must be configured to use both IP and IP.PIPE.

About the Warehouse Proxy agent

The Warehouse Proxy agent uses ODBC to write the historical data to a supported relational database. Only one Warehouse Proxy agent can be configured and running in your enterprise at one time. This proxy agent can handle warehousing requests from all managed systems in the enterprise. The proxy agent should be connected to the hub CMS. We recommend, if possible, installing the proxy agent on the same machine on which the warehouse database resides. See Installing and Setting up OMEGAMON Platform and CandleNet Portal on Windows and UNIX for details regarding installation of the Warehouse Proxy agent.
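Once the data source exists, a quick way to verify that the DSN is visible and that the warehousing user ID can reach SQL Server is to open an ODBC connection against it. This is a sketch, not part of the product; it assumes the third-party pyodbc package, and the user ID and password shown are placeholders.

```python
import pyodbc  # third-party ODBC bridge: pip install pyodbc

# The DSN name is fixed by the product; the credentials are placeholders.
conn = pyodbc.connect("DSN=Candle Data Warehouse;UID=warehouse_user;PWD=secret")
cursor = conn.cursor()

# SQL Server reports its version through @@VERSION; any successful round trip
# confirms the data source points at a reachable server.
cursor.execute("SELECT @@VERSION")
print(cursor.fetchone()[0])
conn.close()
```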

Configuring Your Warehouse

Overview

You use the history data collection configuration program in Candle Management Workstation (CMW) and in CandleNet Portal to specify how often data is rolled off to a relational database.

Naming of warehoused history tables

Warehoused history tables in the database have the same names as the group names of the history tables. For example, Windows Servers history for group name NT_System is collected in a binary file named WTSYSTEM. Historical data in this file, WTSYSTEM, is warehoused to the database in a table named NT_System.

The following UNIX history tables are exceptions to the foregoing: the User and Disk groups are exported to database tables named UNIXUSER and UNIXDISK. This is because User and Disk are reserved words in SQL Server; tables named User and Disk could not be queried using MS Query.

Columns added to the warehouse database

Two columns are automatically added to the warehouse database:

TMZDIFF. The time zone difference from Universal Time (GMT), shown in seconds.

WRITETIME. The CT timestamp when the record was written. This is a 16-character value in the format cyymmddhhmmssttt, where:
       c = century (1 = 21st century)
       yymmdd = year, month, day
       hhmmssttt = hours, minutes, seconds, milliseconds

Attributes formatting

Some attributes need to be formatted for display purposes; for example, floating point numbers that specify a certain number of precision digits to be printed to the right of the decimal point. These display formatting considerations are specified in product attribute files. The warehouse database displays the correct attribute formatting only for those attributes that use integers with floating point number formats.

Logging successful exports of historical data

Every successful export of historical data is logged in the Candle Data Warehouse in a table called WAREHOUSELOG. The WAREHOUSELOG table contains information such as the origin node, the table to which the export occurred, the number of rows exported, the time the export took place, and so forth. You can query this table to learn the status of your exported history data.
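For example, you can list export activity with an ordinary SQL query against the Candle Data Warehouse data source. The sketch below uses SELECT * because the exact WAREHOUSELOG column names can vary by version; inspect the result once to learn the schema before writing a narrower query.

       -- Sketch only: examine everything logged by the Warehouse Proxy.
       -- Run against the Candle Data Warehouse database.
       SELECT * FROM WAREHOUSELOG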

Preventing Historical Data File Corruption

Overview

Because history data storage on non-z/OS platforms uses flat files that are not indexed, corruption of historical data can occur. If history data is stored at the agent or at the CMS, it is important to roll off the existing history data files and meta files into text files, and then delete the history data files and meta files at the agent or at the CMS for the selected tables, to avoid corruption of the warehoused database tables. See "Converting History Files to Delimited Flat Files (Windows and OS/400)" on page 53.

Note: This situation does not apply to z/OS history data, because that data is stored in the Persistent Data Store (CT/PDS) facility.

To avoid corruption of historical data files, you must roll off and delete existing data files prior to:

- modifying the Advanced History Configuration options when storing history data at the CMS. See "Using the Advanced History Configuration Options Dialog" on page 44.
- upgrading an existing monitoring agent to a new release when storing history data at the agent. See Installing and Setting up OMEGAMON Platform and CandleNet Portal on Windows and UNIX for installation instructions.

Preventing corruption when storing data at the CMS

If you store historical data at the CMS, perform the following procedure before using the Advanced History Configuration options:

1. Save, roll off, or export the existing history data that is stored at the CMS for the selected table.
2. Delete the CMS history data files and meta files for the selected table only.
3. If you are warehousing the data, save or rename the existing database table, in case you want to retain the data for later use.
4. Using the SQL DROP command, delete the database table.

You may now make modifications to the Advanced History Configuration options.

Preventing corruption when storing data at the agent

If you store historical data at the monitoring agent, perform the following procedure before upgrading the agent to a new release. Perform this procedure when you can identify which, if any, product tables have added new attributes; if you are unsure about newly added attributes, perform the procedure for all existing product history tables.

1. Save, roll off, or export the existing history data files that are stored at the agent.
2. Delete the agent history data files and the meta files.

3. If you are warehousing the data, save or rename the existing database table, in case you want to retain the data for later use.
4. Using the SQL DROP command, remove the database table. (See the SQL sketch at the end of this section.)

You may now proceed with the agent upgrade.

If your database is corrupted

If your database is corrupted, you can repair it using this procedure:

1. Stop the Warehouse Proxy agent.
2. Stop the collection of historical data.
3. Delete the history data files and the meta files.
4. If you are warehousing the data, save or rename the existing database table, in case you want to retain the data for later use.
5. Using the SQL DROP command, delete the database table.
6. Return to the Historical Data Collection program, Advanced History Configuration option, and select your attributes. If you think you might want to add to the table later, select all of the attributes now. You can always go back and remove the attributes that you do not want; once you remove them, the table will still be large enough for attributes that you might want to add later.
   Note: You cannot configure data collection for individual attributes from CandleNet Portal. If you want to exclude or include specific attributes in a group, you must configure collection from the CMW. See "Configuring Historical Data Collection on CMW" on page 37.
7. Start collecting data.
8. Restart the Warehouse Proxy agent. The SQL Server recreates the database tables.
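The save-or-rename and DROP steps in the procedures above are ordinary SQL Server commands. The sketch below uses the NT_System table as an example; the saved-table name is illustrative. Note that renaming already removes the original table name, so the rename and the DROP are alternatives, not a sequence.

       -- Option 1 (sketch): keep the data by renaming the warehoused table.
       EXEC sp_rename 'NT_System', 'NT_System_SAVED'

       -- Option 2 (sketch): discard the data entirely.
       DROP TABLE NT_System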

Error Logging for Warehoused Data

Viewing errors in the Event Log

Should an error occur during data rolloff, one or more entries are inserted into the Windows Application Event Log on the system where the Warehouse Proxy is running. To view the Application Event Log, start the Event Viewer by clicking Start > Programs > Administrative Tools > Event Viewer, then select Application from the Log pull-down menu. (On Windows XP, click Start > Control Panel > Administrative Tools > Event Viewer.)

Setting a trace option

You can turn error tracing on to capture additional error messages that can be helpful in detecting problems.

Activating the trace option

To activate the trace option:

1. Click Start > (All) Programs > Candle OMEGAMON XE > Manage Candle Services.
2. Right-click Warehouse Proxy and select Advanced > Edit Trace Parms. The Trace Parameters for Warehouse Proxy dialog displays.
3. Select the RAS1 filters. The default setting is ERROR.
4. Enter the path and file name of the RAS1 log file that will contain the error messages for the Warehouse Proxy. For example:

       c:\candle\cma\logs\khdras1.log

   where khd indicates the product code for the Warehouse Proxy.
5. Enter the KDC_DEBUG setting. None is the default.

Viewing the trace log

To view the trace log containing the error messages:

1. Select Start > Programs > Candle OMEGAMON XE > Manage Candle Services.
2. Right-click Warehouse Proxy and select Advanced > View Trace Log. The Log Viewer window displays the log file for the Warehouse Proxy agent.

Chapter 6. Converting History Files to Delimited Flat Files (Windows and OS/400)

Introduction

Warehousing data to an ODBC database is mutually exclusive with running the file conversion programs described in this chapter. To use these conversion procedures, you must have specified Off for the Warehouse option on the History Configuration panel for the CMW and on the History Collection Configuration dialog for CandleNet Portal.

The history files collected using the rules established in the historical data collection configuration program can be converted to delimited flat files for use in a variety of popular applications, so that you can easily manipulate the data and create reports and graphs.

Use the LOGSPIN program or the Windows AT command to schedule file conversion automatically. Use the krarloff program to invoke file conversion manually. (The LOGSPIN program invokes krarloff when file conversion is scheduled automatically.) For best results, schedule conversion to run every day. This is especially important on OS/400.

Chapter Contents

Conversion Process
Archiving Procedure using LOGSPIN
Archiving Procedure using the Windows AT Command
Converting Files Using krarloff
AS/400 Considerations
Location of the Windows Executables and Historical Data Collection Table Files

Conversion Process

Overview

When setting up the process that converts the history files you have collected into delimited flat files, you can schedule the process automatically, using the LOGSPIN program or the Windows AT command, or run it manually, using the krarloff program. The LOGSPIN program invokes krarloff. Before deciding which method to use, see the Microsoft Windows library for full details on the security implications of running a program such as LOGSPIN versus entering the Windows AT command.

Important: Candle recommends running history file conversion every 24 hours.

Archiving Procedure using LOGSPIN

Overview

To convert historical data files on Windows Candle Management Servers and remote managed systems, follow these steps. Parameters for the logfile entries are described in "Logfile parameters" on page 56.

1. Create a text file with each entry corresponding to a history table file to be converted. The text file must be located on each managed system on which data conversion is performed. The format of each line of the text file is:

       logfile {SIZE=nnn | TIME=hh:mm} [HEADER=(Y/N) DELIM=c OUTPUT=filestem RFILE=tempname KEEP=(Y/N)]

   The parameters in brackets are optional and the parameters in braces are required.

2. To start archiving historical data on the remote managed system, enter the following at the command prompt:

       LOGSPIN filename [archpathname]
   or
       start LOGSPIN filename [archpathname]

   where:
   - filename is the name of the text file described above and is required.
   - archpathname is the name of the path where the archive program is located. This is optional; the default is to use the Windows search sequence.

   Note: Entering the start LOGSPIN command automatically opens an additional window and runs the command in the background.

3. To stop archiving historical data on the remote managed system, enter the following at the command prompt:

       LOGSPIN STOP
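As an illustration, a control file might name two history tables, one archived daily at a fixed time and one archived by size. The table and file names below are examples only; use the binary table file names that exist on your own managed system.

       WTSYSTEM TIME=23:30 HEADER=Y DELIM=, OUTPUT=system
       WTPROCESS SIZE=500

With that file saved as history.txt, archiving would be started in the background with:

       start LOGSPIN history.txt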

Logfile parameters

The table below describes the parameters; the defaults correspond to the krarloff program defaults.

Table 2. Logfile parameter values

Parameter  Description
logfile    Name of the historical table file to be converted/archived.
SIZE       Archive the file at six-hour intervals if it exceeds nnn KB. The SIZE and TIME parameters are mutually exclusive.
TIME       Archive the file once a day at the time specified, in the format hh:mm. The SIZE and TIME parameters are mutually exclusive.
HEADER     Specify Y to include a descriptive header in the archived file. The default is N.
DELIM      Character to be used as a column delimiter. The default is a TAB character.
OUTPUT     Output filename for archived files. The suffix BK0 through BK6 is appended to each file, with BK0 representing the latest archive and BK6 the earliest. If no output filename is specified, the default is the first part of the log filename for an (8.3) filename, or the first 32 characters for a long filename.
RFILE      Intermediate filename used by the LOGSPIN program. The default is the first part of the log filename for an (8.3) filename followed by .tmp, or the first 32 characters for a long filename followed by .tmp.
KEEP       Specify Y to keep the intermediate file. The default is N.

Archiving Procedure using the Windows AT Command

Overview

To archive historical data files on Windows Candle Management Servers and on remote managed systems using the AT command, use the procedure that follows. To find out the format of the command, enter AT /? at the MS-DOS command prompt.

1. For the AT command to function, you must start the Task Scheduler service. To start the Task Scheduler service, select Settings > Control Panel > Administrative Tools > Services.
   Result: The Services window displays.
2. At the Services window, select Task Scheduler. Change the service Start Type to Automatic. Click Start.
   Result: The Task Scheduler service is started.

An example of using the AT command to archive the history files is as follows:

       AT 23:30 /every:m,t,w,th,f,s,su c:\sentinel\cms\archive.bat

In this example, Windows executes the archive.bat file located in c:\sentinel\cms every day at 11:30 pm. An example of the contents of archive.bat is:

       krarloff -o memory.txt wtmemory
       krarloff -o physdsk.txt wtphysdsk
       krarloff -o process.txt wtprocess
       krarloff -o system.txt wtsystem

Converting Files Using krarloff

Overview

When initiated by LOGSPIN, the krarloff program makes an intermediate copy of the captured binary history file. This copy is processed while history data continues to be collected in the emptied original file. History file conversion can occur whether or not the CMS or the agent is running.

You can also initiate krarloff manually, as described below. The krarloff program can be run either at the CMS or at the agent, from the directory in which the history files are stored. See "Location of the Windows Executables and Historical Data Collection Table Files" on page 61. Parameters for the krarloff program are described in "krarloff Parameters" on page 59.

Attributes formatting

Some attributes need to be formatted for display purposes; for example, floating point numbers that specify a certain number of precision digits to be printed to the right of the decimal point. These display formatting considerations are specified in product attribute files. When you use krarloff to roll off historical data into a text file, any attributes that require format specifiers as indicated in the attribute file are ignored; only the raw number appears in the rolled-off history text file. Thus, instead of displaying 45.99% or 45.99, the number 4599 appears. The Warehouse Proxy agent does use the product attribute files to display the correct attribute formatting. However, the Candle Warehouse Database displays the correct attribute formatting only for those attributes that use integers with floating point number formats. See "Warehousing Your Historical Data" on page 47.

Using krarloff on Windows

Run the krarloff command from the directory in which the CMS or the agent is run by entering the following at the command prompt:

       krarloff [-h] [-d delimiter] [-g] [-m meta-file] [-r rename-to-file] [-o output-file] {-s source-file | source-filename}

where the square brackets denote optional parameters, and the curly braces denote a required parameter.

Note: The command is typed on a single line.

Using krarloff on OS/400

Run the krarloff command on OS/400, from the directory in which the CMS is run, by entering the following at the command prompt:

       call qautomon/krarloff parm ([-h] [-g] [-d delimiter] [-m meta-file] [-r rename-source-file-to] [-o output-file] {-s source-file | source-file})

where the square brackets denote optional parameters, and the curly braces denote a required parameter. If you run krarloff on OS/400 from the directory in which the agent is running, replace qautomon with the name of the library for your agent. For example, the MQ agent would use kmqlib in the command string.

Note: The command is typed on a single line.

krarloff Parameters

Table 3. krarloff Parameters

Parameter  Default Value    Description
-h         off              Controls the presence or absence of the header in the output file. If present, the header is printed as the first line. The header identifies the attribute column names.
-d         tab              Delimiter used to separate fields in the output text file. Valid values are any single character (for example, a comma).
-g         off              Controls the presence or absence of the product group_name in the header of the output file. Add -g to the krarloff invocation line to include group_name.attribute_name in the header.
-m         source-file.hdr  Meta-file that describes the format of the data in the source file. If no meta-file is specified on the command line, the default filename is used.
-r         source-file.old  Rename-to-filename parameter used to rename the source file. If the renaming operation fails, the program waits two seconds and retries the operation.
-o         source-file.nnn  Output filename. The name of the file containing the output text. In the default, nnn is the Julian day.
-s         none             Required parameter. Source binary history file that contains the data to be read. Within the curly braces, the vertical bar (|) denotes that you can either use the -s source option or, if a name with no option is specified, it is taken as the source filename. No defaults are assumed for the source file.
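For example, a Windows invocation that writes a comma-delimited file with a header line from the Windows Servers system table might look like this (the file names are illustrative):

       krarloff -h -d , -o wtsystem.txt wtsystem

Here wtsystem is the binary history file in the current directory, and wtsystem.txt receives the delimited output.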

AS/400 Considerations

Where is the historical data stored on the AS/400?

User data is stored in QUSRSYS. For each table, two files associated with historical data collection are stored on OS/400. For example, if you are collecting data for the system status attributes, these two files are KA4SYSTS and KA4SYSTSM. The former holds the binary data output by the OMA; the latter is the metafile, a file with a single row that contains the names of the columns. The contents of both files can be displayed using DSPPFM.

What happens after krarloff is run?

Continuing the system status example above: after krarloff runs, file KA4SYSTS becomes KA4SYSTSO, and a new KA4SYSTS file is generated when another row of data is available. KA4SYSTSM remains untouched. KA4SYSTSH is the file output by krarloff; it contains the data in delimited flat file format. This file can be transferred from the AS/400 to the workstation by means of a file transfer program (FTP).
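Putting the pieces together, a conversion of the system status table on OS/400 might be invoked as shown below. This is a sketch only: the output file name is given explicitly here for clarity, and the exact parameter quoting may vary with your command environment.

       call qautomon/krarloff parm ('-o' 'KA4SYSTSH' 'KA4SYSTS')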

Location of the Windows Executables and Historical Data Collection Table Files

Location of Windows executables

Executables are located as follows:

- \candle\cms directory on the CMS, where candle is the directory in which the CMS was installed
- \candle\cma directory on the remote managed systems, where candle is the directory in which the agents were installed

Note: The krarloff conversion program must be located in the same directory as the LOGSPIN.EXE program.

Location of Windows historical data table files

If you run the CMS and agents as processes or as services, the historical data table files are located in the:

- \candle\cms directory on the CMS, where candle is the directory in which the CMS was installed
- \candle\cma\logs directory on the remote managed systems, where candle is the directory in which the agents were installed

Location of history configuration files on Windows

The history configuration files are located in \candle\cms\sqllib.


Chapter 7. Converting History Files to Delimited Flat Files (z/OS)

Introduction

The history files collected by the rules established in the Historical Data Collection Configuration program, or by your definitions related to historical data collection during product installation, can be converted to delimited flat files automatically, as part of your persistent data store maintenance procedures (see "Maintaining the Persistent Data Store (CT/PDS)" on page 75), or manually, using a MODIFY command. You can use the delimited flat files as input to a variety of popular applications to easily manipulate the data and create reports and graphs.

Data that has been warehoused cannot be extracted, because warehoused data is deleted from the persistent data store. To use these conversion procedures, you must have specified Off for the Warehouse option on the History Configuration panel for the CMW and on the History Collection Configuration dialog for CandleNet Portal.

Chapter Contents

Automatic Conversion and Archiving Process
Location of the z/OS Executables and Historical Data Table Files
Manual Archiving Procedure

Automatic Conversion and Archiving Process

Overview

When you customized your OMEGAMON environment, you were given the opportunity to specify the EXTRACT option for maintenance. Specifying the EXTRACT option ensures that the process to convert and archive information stored in your history data tables is scheduled automatically; no further action on your part is required. As applications write historical data to the history data tables, the persistent data store detects when a given dataset is full, launches the KPDXTRA process to copy the dataset, and notifies the Candle Management Server (CMS) that the dataset can once again be used to receive historical information. Additional information about the persistent data store can be found in "Maintaining the Persistent Data Store (CT/PDS)" on page 75.

As an alternative to automatic scheduling of conversion, you can manually issue the command to convert the historical data files. Information about manually converting your files is found in "Manual Archiving Procedure" on page 68.

Converting Files Using KPDXTRA

The conversion program, KPDXTRA, is called by the persistent data store maintenance procedures when the EXTRACT option is specified for maintenance. This program reads a dataset containing the collected historical data and writes out two files for every table that has data collected for it. The processing of this data does not interfere with the continuous collection being performed. Because the process is automatic, only a brief overview of the use of KPDXTRA is provided here. For full information about KPDXTRA, review the sample JCL distributed with your OMEGAMON XE product. The sample JCL is found as part of the sample job KPDXTRA contained in the sample libraries RKANSAM and TKANSAM.

Attributes formatting

Some attributes need to be formatted for display purposes; for example, floating point numbers that specify a certain number of precision digits to be printed to the right of the decimal point. These display formatting considerations are specified in product attribute files. When you use KPDXTRA to roll off historical data into a text file, any attributes that require format specifiers as indicated in the attribute file are ignored; only the raw number appears in the rolled-off history text file. Thus, instead of displaying 45.99% or 45.99, the number 4599 appears.

About KPDXTRA

KPDXTRA runs in the batch environment as part of the maintenance procedures. It accepts a parameter that allows the default column separator to be changed. The z/OS JCL syntax for executing this command is:

       // EXEC PGM=KPDXTRA,PARM='PREF=dsn-prefix [DELIM=xx] [NOFF]'
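For illustration, a complete extract step might look like the following sketch. All dataset names are examples that follow the naming conventions used elsewhere in this chapter, and the DD names are summarized in the next topic; always start from the KPDXTRA sample job in RKANSAM or TKANSAM rather than from this sketch.

       //*------------------------------------------------------------
       //* Sketch only: extract history tables to delimited flat files.
       //* Dataset names are illustrative, not product defaults.
       //*------------------------------------------------------------
       //EXTRACT  EXEC PGM=KPDXTRA,PARM='PREF=CCCHIST.PDSGROUP DELIM=6B'
       //RKPDOUT  DD SYSOUT=*                      KPDXTRA LOG MESSAGES
       //RKPDLOG  DD SYSOUT=*                      CT/PDS MESSAGES
       //RKPDIN   DD DISP=SHR,DSN=CCCHIST.PDSGROUP.RKPDIN
       //RKPDIN1  DD DISP=SHR,DSN=CCCHIST.PDSGROUP.PDS#1
       //RKPDIN2  DD DUMMY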

Several files must be allocated for this job to run. In recent versions of the CT/PDS, all datasets are kept in read/write state even when they are not active, which makes the datasets unavailable while the CMS is running. That is, jobs cannot be run against the active datasets, and the inactive datasets must be taken offline. You can dynamically remove a dataset from the CMS by issuing the modify command:

       F stcname,KPDCMD QUIESCE FILE=DSN:dataset

If you must run a utility program against an active data store, issue a SWITCH command prior to issuing this QUIESCE command.

DDNAMEs required to be allocated for KPDXTRA

The following is a summary of the DDnames that must be allocated for KPDXTRA. Refer to the sample JCL in the sample libraries distributed with the product for additional information.

Table 4. DD Names Required

RKPDOUT   KPDXTRA log messages
RKPDLOG   Persistent data store (CT/PDS) messages
RKPDIN    Table definition commands file (input to the CT/PDS subtask) as set up by CICAT
RKPDIN1   CT/PDS file from which data is to be extracted
RKPDIN2   Optional control file, defined as a DUMMY DD statement

KPDXTRA parameters

The table that follows specifies the KPDXTRA parameters.

Table 5. KPDXTRA parameters

Parameter  Default Value  Description
PREF=      none           Required parameter. Identifies the high-level qualifier where the output files will be written.
DELIM=     tab            Specifies the separator character to use between columns in the output file. The default is a tab character, X'05'. To specify some other character, specify the 2-byte hexadecimal representation of that character. For example, to use a comma, specify DELIM=6B.
QUOTE      NQUOTE         Optional parameter that puts double quotes around all character type fields. Trailing blanks are removed from the output. Makes the output format of the KPDXTRA program identical to the output generated by the distributed krarloff program.

Table 5. KPDXTRA parameters (continued)

Parameter  Default Value  Description
NOFF       off            Controls the creation or omission of a separate header file that contains the format of the tables, and likewise controls the presence or absence of the header in the output data file created by the extract operation. If NOFF is specified, the header file is not created, but the header information is included as the first line of the data file. The header information shows the format of the extracted data.

KPDXTRA messages

These messages can be found in the RKPDOUT sysout logs created by the execution of the maintenance procedures:

       Persistent datastore Extract program KPDXTRA - Version V
       Using output file name prefix: CCCHIST.PDSGROUP
       The following characters will be used to delimit output file tokens:
         Column values in data file...........: 0x05
         Parenthesized list items in format file: 0x6b
       Note: Input control file not found; all persistent data will be extracted.
       Table(s) defined in persistent datastore file CCCHIST.PDSGROUP.PDS#1:
         Appl.     Table     Extract
         Name      Name      Status
         PDSSTATS  PDSCOMM   Excluded
         PDSSTATS  PDSDEMO   Included
         PDSSTATS  PDSLOG    Included
         PDSSTATS  TABSTATS  Included
       Checking availability of data in data store file:
         No data found for Appl: PDSSTATS Table: PDSDEMO. Table excluded.
         No data found for Appl: PDSSTATS Table: TABSTATS. Table excluded.
       The following 1 table(s) will be extracted:
         Appl.     Table    No.   Oldest           Newest
         Name      Name     Rows  Row              Row
         PDSSTATS  PDSLOG   431   /01/10 05:51:    /02/04 02:17:54
       Starting extract operation.
       Starting extract of PDSSTATS.PDSLOG.
       The output data file, CCCHIST.PDSGROUP.D70204.PDSLOG, does not exist; it will be created.
       The output format file, CCCHIST.PDSGROUP.F70204.PDSLOG, does not exist; it will be created.
       Extract completed for PDSSTATS.PDSLOG. 431 data rows retrieved, 431 written.
       Extract operation completed.
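When you run a job like this outside the automated maintenance, remember that the CMS holds its datasets open. A typical sequence (the started task and dataset names are illustrative) is to switch the group to a fresh dataset and then quiesce the full one before extracting from it:

       F CANSDSST,KPDCMD SWITCH GROUP=GENHIST
       F CANSDSST,KPDCMD QUIESCE FILE=DSN:CCCHIST.PDSGROUP.PDS#1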

Location of the z/OS Executables and Historical Data Table Files

Location of z/OS executables

Executables are located in the &hilev.&midlev.RKANMOD or &hilev.&midlev.TKANMOD library, where:

- &hilev is the library qualifier under which the CMS was installed
- &midlev is the name you provided at installation time

Location of z/OS historical data table files

The historical data files created by the extraction program are located in the following library structure:

       &hilev.&midlev.&dsnlolev.tablename.D
       &hilev.&midlev.&dsnlolev.tablename.H

where:

- &hilev is the library qualifier under which the CMS was installed
- &midlev is the name you provided at installation time
- &dsnlolev is the low-level qualifier of the dataset names as set by the configuration tool
- tablename can be up to 10 characters; when the tablename is greater than 8 characters, the tablename portion of the dataset contains the first 8 characters followed by a period, with the remaining characters of the name appended

Datasets with a name ending with D represent data output. Datasets with a name ending with H represent header or format output.

Manual Archiving Procedure

Converting historical files manually

To manually convert historical data files on the z/OS CMS and on remote managed systems, issue the following MODIFY command:

       F stcname,KPDCMD SWITCH GROUP=cccccccc EXTRACT

where:

- stcname identifies the name of the started task that is running either the CMS or the MVS agents.
- cccccccc identifies the group name associated with the persistent data store allocations. The values for cccccccc may vary based on which products are installed; the standard group name is GENHIST.

When this command is executed, only the tables associated with the group identifier are extracted. If multiple products are installed, each can be controlled by separate SWITCH commands. This switching can be automated by using either an installation scheduling facility or an automation product. You can also use the OMEGAMON Platform advanced automation features to execute the SWITCH command: define a situation that, when it becomes true, executes the SWITCH command as its action.
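For example, with a CMS started task named CANSDSST (the name is illustrative), extracting the tables in the standard group would look like this:

       F CANSDSST,KPDCMD SWITCH GROUP=GENHIST EXTRACT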

Chapter 8. Converting History Files to Delimited Flat Files (UNIX Systems)

Introduction

Data that has been warehoused cannot be extracted, because warehoused data is deleted from the persistent data store. To use these conversion procedures, you must have specified Off for the Warehouse option on the History Configuration panel for the CMW and on the History Collection Configuration dialog for CandleNet Portal.

This chapter explains how the UNIX CandleHistory script is used to convert the saved historical data contained in the history data files to delimited flat files. You can use the delimited flat files in a variety of popular applications to easily manipulate the data and create reports and graphs. The procedure described in this chapter empties the history accumulation files, and must be performed periodically so that the history files do not take up needless amounts of disk space.

Chapter Contents

Understanding History Data Conversion
Performing the History Data Conversion

Understanding History Data Conversion

Overview

In the UNIX environment, you use the CandleHistory script to activate and customize the conversion procedure that turns selected binary historical data tables into a form usable by other software products. The historical data that is collected is in a binary format and must be converted to ASCII before it can be used by third-party products. Each binary file is converted independently.

The historical data collected by the Candle Management Server (CMS) may be at the host location of the CMS or at the location of the reporting agent. Conversion can be run at any time, whether or not the CMS or agents are active. Conversion applies to all history data collected under the current CANDLEHOME associated with a single CMS server, whether the data was written by the CMS or by a remote agent. Additional information about CandleHistory can be found in the online help.

When you enter CandleHistory -h at the command line, this output displays:

       CandleHistory [ -h CANDLEHOME ] -C [ -L nnn[Kb|Mb] ] [ -t masks*,etc ]
           [ -D delim ] [ -H | +H ] [ -N n ] [ -p cms_name ] prod_code
       CandleHistory -A?
       CandleHistory [ -h CANDLEHOME ] -A perday|0 [ -W days ] [ -L nnn[Kb|Mb] ]
           [ -t masks*,etc ] [ -D delim ] [ -H | +H ] [ -N n ]
           [ -i instance | -p cms_name ] prod_code

Note: Certain parameters are required. The pipe symbol separating items denotes mutual exclusivity (for example, Kb|Mb means enter either Kb or Mb, not both). The command is typically entered as a single line at the UNIX command prompt.

The parameters used with the script are documented in "History conversion parameters" on page 72.

Performing the History Data Conversion

Overview

The CandleHistory script schedules the conversion of historical data to delimited flat files. Both the manual process to perform a one-time conversion and the script options that let you schedule automatic conversions are documented below.

Important: The CandleHistory script must be executed from CANDLEHOME/bin.

After the conversion has taken place, the resulting delimited flat file has the same name as the input history file, with an extension that is a single numerical digit. For example, if the input history table name is KOSTABLE, the converted file will be named KOSTABLE.0; the next conversion will be named KOSTABLE.1, and so on.

Performing a one-time conversion

To perform a one-time conversion, type the following at the command prompt:

       ./CandleHistory -C prod_code

Scheduling basic automatic history conversions

Use CandleHistory to schedule automatic conversions via the UNIX cron facility. To schedule a basic automatic conversion, type the following at the command prompt:

       ./CandleHistory -A n prod_code

where n is a number from 1 to 24 that specifies the number of times per day the data conversion program will run, rounded up to the nearest divisor of 24. The product code is required as well. For example, CandleHistory -A 7 ux means run history conversion every three hours.

Customizing your history conversion

You can use the CandleHistory script to further customize your history conversion by specifying additional options. For example, you can choose to convert only files that are above a particular size limit, or to perform the history conversion on particular days of the week. The table that follows describes all of the history conversion parameters.

Table 6. History conversion parameters

-C           Identifies this as an immediate, one-time conversion call. Required.
-A n         Identifies this as a scheduled history conversion call. Required. Automatically runs the conversion the specified number of times per day; absence of -A means run the conversion now. The value n is 1-24, the number of runs per day, rounded up to the nearest divisor of 24. For example, -A 7 means run every three hours.
-A 0         Cancels all automatic runs for the tables specified.
-A?          Lists automatic collection status for all tables.
-W           Day of the week (0=Sunday, 1=Monday, and so on). Can be a comma-delimited list of numbers or ranges thereof. For example, -W 1,3-5 means Monday, Wednesday, Thursday, and Friday. The default is Monday through Saturday (1-6).
-H           Exclude column headers. The default is attribute-name headers.
+H           Include group (long table) names in column headers, in the format Group_desc.Attribute. The default is the attribute name only.
-L           Convert only files whose size is over a specified number of Kb/Mb (the suffix can be any of none, K, Kb, M, or Mb, with none defaulting to Kb).
-h           Override for the value of $CANDLEHOME.
-t           List of tables or mask patterns delimited by commas, colons, or blanks. If the pattern has embedded blanks, it must be surrounded with quotes.
-D           Output delimiter to use. The default is the tab character. Quote or escape a blank delimiter.
-N           Keep generations 0-n of output (default 9).
-i instance  For agent instances (those not using the default queue manager). Directs the program to process historical data collected by the specified agent instance. For example, -i qm1 specifies the instance named qm1.
-p cms_name  Directs the program to process historical data collected by the specified CMS instead of the agent. Note: A product code of ms must be used with this option. The default action is to process data collected by the prod_code agent.
prod_code    Two-character product code of the product whose historical data is to be converted. Refer to Installing and Setting up OMEGAMON Platform and CandleNet Portal on Windows and UNIX for product codes.
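Combining several of these options, the following call (the option values are illustrative) would convert UNIX agent history four times a day, Monday through Friday, only for files larger than 500 KB, writing comma-delimited output:

       ./CandleHistory -A 4 -W 1-5 -L 500Kb -D , ux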

Chapter 9. Converting History Files to Delimited Flat Files (HP NonStop Kernel Systems)

Introduction

If you selected the option to warehouse data to an ODBC database, that option is mutually exclusive with running the file conversion programs described in this chapter. To use these conversion procedures, you must have specified Off for the Warehouse option on the History Configuration panel for the CMW and on the History Collection Configuration dialog for CandleNet Portal.

The history files collected using the rules established in the HDC Configuration program can be converted to delimited flat files for use in a variety of popular applications, so that you can easily manipulate the data and create reports and graphs. Use the krarloff program to invoke file conversion manually. For best results, schedule conversion to run every day.

Support is provided for OMEGAMON XE for WebSphere MQ Configuration and for OMEGAMON XE for WebSphere MQ Monitoring running on the HP NonStop Kernel operating system (formerly Tandem). For information specific to OMEGAMON XE for WebSphere MQ Monitoring relating to historical data collection, see the Customizing Monitoring Options topic in your version of the product documentation.

Chapter Contents

Conversion Process

Conversion Process

Overview

On HP NonStop Kernel systems, you convert the history files you have collected to delimited flat files manually, by running the krarloff program. Parameters for the krarloff program are described in "krarloff Parameters" on page 59.

Important: Candle recommends running history file conversion every 24 hours.

Using krarloff on HP NonStop Kernel

The history files are kept on the DATA subvolume, under the default <$VOL>.CCMQDAT. However, the location of the history files depends on where you started the monitoring agent: if you started the monitoring agent using STRMQA from the CCMQDAT subvolume, the files are stored on CCMQDAT. You can run krarloff from the DATA subvolume by entering the following:

       RUN <$VOL>.CCMQEXE.KRARLOFF <parameters>

Note that CCMQDAT and CCMQEXE are defaults; during the installation process, you can assign your own names for these subvolumes. For a table listing the krarloff parameters, see "krarloff Parameters" on page 59.

Attributes formatting

Some attributes need to be formatted for display purposes; for example, floating point numbers that specify a certain number of precision digits to be printed to the right of the decimal point. These display formatting considerations are specified in product attribute files. When you use krarloff to roll off historical data into a text file, any attributes that require format specifiers as indicated in the attribute file are ignored; only the raw number appears in the rolled-off history text file. Thus, instead of displaying 45.99% or 45.99, the number 4599 appears.
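As an illustration, with the defaults in place, a run that rolls off a history table named QMHIST might look like the sketch below. The volume, table, and output names here are examples, not product defaults; the parameters behave as described in the table on page 59.

       RUN $DATA.CCMQEXE.KRARLOFF -o QMHISTO QMHIST

Here QMHIST is the source binary history file and QMHISTO receives the delimited output.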

Appendix A. Maintaining the Persistent Data Store (CT/PDS)

Introduction

The persistent data store (CT/PDS) runs in the same address space as the Candle Management Server (CMS). It provides the ability to record and retrieve tabular relational data on a 24-by-7 basis while maintaining indexes on the recorded data. This appendix describes the procedures you use to maintain the CT/PDS. See the configuration documentation for your product for instructions on configuring the persistent data store.

Note: For applications configured to run in the CMS address space, the Configure persistent data store step in the CMS product configuration is required. This step applies to z/OS-based products and to non-z/OS-based products that enable historical data collection in this z/OS CMS. Any started task associated with a product (including the CMS address space itself) that is running prior to configuring the CT/PDS must be stopped.

Chapter Contents

About the Persistent Data Store
Components of the CT/PDS
Overview of the Automatic Maintenance Process
Making Archived Data Available
Exporting and Restoring Persistent Data
Data Record Format of Exported Data
Extracting CT/PDS Data to Flat Files
Command Interface

About the Persistent Data Store

Overview

The persistent data store (CT/PDS) is used for writing and retrieving historical data. The program is the server portion of a client/server application: the client code either provides data to be inserted into relational tables or makes requests to retrieve the data. The CT/PDS acts as a subset of a database management system that is concerned only with the physical level of recording and retrieving data.

The data being written to the persistent data store is organized by tables, groups, and datasets. Each table is assigned to a group, and a group can have one or more datasets assigned to it; normally, three datasets are assigned to each group. Groups can have multiple tables assigned to them, so it is not necessary to have a dataset for each table defined to the system. The assignment of tables, groups, and datasets is defined during configuration of your product. See the product configuration documentation for details.

The CMS provides automatic maintenance for the datasets in the CT/PDS. Two procedures and one CLIST, located in &rhilev.&midlev.RKANSAM, provide the maintenance. Their default names are:

       KPDPROC1
       KPDPROCC
       KPDPROC2

If you changed the prefix KPDPROC during the configuration process, the suffixes remain 1, C, and 2, respectively. See "Overview of the Automatic Maintenance Process" on page 79.

User ID when running the CT/PDS procedures

The CT/PDS procedures run with the user ID of the person who installed the product.

Components of the CT/PDS

Overview

The components described below make up the CT/PDS.

KPDMANE

This is the primary executable program. It is a server for other applications running in the same address space. This program is designed to run inside the Engine address space as a separate subtask. Although it is capable of running inside the Engine, it does not make any use of Engine services, because the KPDMANE program is also used in other utility programs that are intended to run in batch mode. This is the program that eventually starts the maintenance task when it performs a switch and determines that no empty datasets are available.

KPDUTIL

This program is used primarily to initialize one or more datasets for CT/PDS use. The program simply attaches a subtask and starts the KPDMANE program in it. The DD statements used when this program is run dictate what control files are executed by the KPDMANE program.

KPDARCH

This program acts as a client CT/PDS program that pulls data from the specified dataset and writes it out to a flat file. The program attaches a subtask and starts the KPDMANE program in it. The output data is still in an internal format, with all the index information excluded.

KPDREST

This program acts as a client CT/PDS program that reads data created by the KPDARCH program and inserts it back into a dataset in the proper format so that the CT/PDS can use it. This includes the rebuilding of index information. The program attaches a subtask and starts the KPDMANE program in it.

KPDXTRA

This is a client CT/PDS program that pulls data from a dataset and writes it to one or more flat files, with all column data converted to EBCDIC and separated by tabs. This extracted data can easily be loaded into a DBMS or into spreadsheet programs such as Excel. As with the other client programs, a subtask is attached and the KPDMANE program is loaded and executed in that environment. See "Extracting CT/PDS Data to Flat Files" on page 91.

KPDDSCO

This program communicates with the started task that is running the CT/PDS and sends it commands to be executed. The typical command executed is the RESUME command, which tells the CT/PDS that it can once again use a dataset. This program is capable of using two forms of communication. The older version acts as a client application to the CMS; this mode uses SNA to connect to the server and submit the command requests. The later version uses an SVC 34 to issue a modify command

to the proper started task. A secondary function of this program is to log information in a general log maintained in the CT/PDS tables.

Operation of the CT/PDS

The KPDMANE program invokes maintenance automatically in two places. The first is on startup, when it reads and processes every dataset it knows about: it looks at internal data to determine whether the dataset is in a known and stable state and, if not, it issues a RECOVER command. The second is when it is recording information from applications onto an active dataset for a group: if it detects that it is running out of room on a write operation, it executes the SWITCH command internally.

RECOVER logic

This code puts the dataset into a quiesce state and closes the file. Information is set up to request an ARCHIVE, INIT, and RESTORE operation to be performed by the maintenance procedures. An SVC 34 is issued for a START command on KPDPROC1 (or its overridden name). The command exits to the caller with the dataset unusable until a RESUME command is executed.

SWITCH logic

The SWITCH command looks at all of the datasets assigned to the group and finds an empty one. Note that if no empty datasets are available, future attempts to write data to any dataset in the group will fail. Normally, an empty dataset is found and marked as the active dataset. A test is made on the dataset being deactivated (because it is full) to see if the EXTRACT option was specified; if so, the EXTRACT command for the dataset is executed. The next test checks whether any empty datasets remain in the current group. If not, the code finds the dataset with the oldest data and marks it for maintenance. With the latest release of the CT/PDS, the code checks whether any of the maintenance options BACKUP, EXPORT, or EXTRACT were specified for this dataset; if not, the INITDS command is executed. Otherwise, the BACKUP command is executed.

BACKUP logic

This code puts the dataset in a quiesce state and closes it. A test is made to see whether the user specified either BACKUP or EXPORT for the dataset, and appropriate options are set for the started task. The options always include a request to initialize the dataset. An SVC 34 is issued to start the KPDPROC1 procedure. The code returns to the caller with the dataset unavailable until the RESUME command is executed.

EXTRACT logic

This is similar to the BACKUP logic, except that the only option specified is for an EXTRACT run, with no initialization performed on the dataset.

RESUME logic

This code opens the specified dataset name and verifies that it is valid. The dataset is taken out of the quiesce state and made once again available for activation during the next SWITCH operation.

Overview of the Automatic Maintenance Process

Overview

When a dataset becomes full, the CT/PDS selects an empty dataset and makes it active. Once it is active, the CT/PDS checks whether any more empty datasets remain. If there are no more empty datasets, maintenance is started on the oldest dataset, and data recording to it is suspended. Prior to launching the KPDPROC1 process, the CT/PDS checks whether either the BACKUP function or the EXPORT function has been specified. If neither function has been specified, the dataset is initialized within the CT/PDS started task and KPDPROC1 is not executed.

The maintenance process consists of three files that are generated and tailored by the configuration tool and invoked by the persistent data store. The files are described below.

KPDPROC1

KPDPROC1 is a procedure that is started with an MVS START command. Limited information is passed to this started task, which it uses to drive a CLIST in a TSO environment. The configuration tool creates this file and puts it into the RKANSAM library for each runtime environment (RTE) that has a CT/PDS component. This procedure must be copied to a system-level procedure library so the command issued to start it can be found.

The parameters passed to KPDPROC1 vary based on the version of the configuration tool and the CT/PDS; this document assumes the latest version is installed. Three parameters are passed to the started task:

HILEV    The high-level qualifier for the RTE that configured this version of the CT/PDS. It is obtained by extracting information from the DD statement that points to the CT/PDS control files.
LOWLEV   The low-level qualifier for the sample library. It currently contains the RKANSAM field name.
DATASET  The fully qualified name of the dataset being maintained. It is possible to have a dataset name that does not match the high-level qualifier specified in the first parameter.

KPDPROCC

KPDPROCC is the CLIST that is executed by the KPDPROC1 procedure. The CLIST has the task of obtaining all of the information needed to perform the maintenance and of submitting a job to execute the desired maintenance.

KPDPROC2

KPDPROC2 is the actual job that is executed to save the data and to initialize the dataset so it can once again be used by the CT/PDS. This procedure:

- backs up the data
- deletes the dataset
- allocates a new dataset with the same parameters as before
- makes the new dataset available for reading and writing

The configuration tool allows you to pick the first seven characters of the maintenance procedure names; KPDPROC is the default if you do not modify it.

What part of maintenance do you control?

Most of the CT/PDS maintenance procedure is automatic and does not require your attention. Through the configuration tool, you have already specified the EXTRACT, BACKUP, and EXPORT options by indicating a Y or N for each dataset group. See "Command Interface" on page 94 for descriptions of additional commands that are used primarily for maintenance.

- BACKUP makes an exact copy of the dataset being maintained.
- EXPORT writes the data to a flat file in an internal format that can be used by external programs to post-process the data. This is also used for recovery purposes when the CT/PDS detects potential problems with the data.
- EXTRACT writes the data to a flat file in human-readable form, which is suitable for loading into other DBMS systems.

If none of the maintenance options are specified, the data within the dataset being maintained is erased.

You can indicate whether to:

- back up the data for each dataset group
- back up the data to tape or to DASD for all dataset groups

Indicating dataset backup to tape or to DASD

For all dataset groups that you selected to back up, you must indicate whether you want to back up the data to tape or to DASD. This decision applies to all datasets.

Table 7. Determining the medium for dataset backup

If you are backing up datasets to...   THEN...
tape                                   use KPDPROC2 as shipped
DASD                                   follow the procedure below

Backing up datasets to DASD

Use this procedure to modify KPDPROC2:

1. Access the procedure in &rhilev.&midlev.RKANSAM(KPDPROC2) with any editor.
2. Remove the comment characters from the step that backs up datasets to DASD, and insert comment characters in the step that backs up datasets to tape.
3. Save the procedure.
4. Copy procedure KPDPROC2 to your system procedure library, usually SYS1.PROCLIB.

Naming the export datasets

When you choose to export data, you are requesting to write data to a sequential dataset. The names of all exported datasets follow the format:

       &rhilev.&midlev.&dsnlolev.Annnnnnn

where:

- &rhilev is the high-level qualifier of all datasets in the CT/PDS, as you specified in the CICAT
- &midlev is the mid-level qualifier of all datasets in the CT/PDS, as you specified in the CICAT
- &dsnlolev is the low-level qualifier of the dataset names as set by the CICAT
- A is a required character
- nnnnnnn is a sequential number

Making Archived Data Available

Overview

This topic shows you how to make data available to the products that use the CT/PDS after the data has been backed up to DASD or to tape. To make the data available, you dynamically restore a connection between an archived dataset and the CMS.

When the automatic maintenance facility backs up a dataset in the persistent data store, it performs the following activities:

- disconnects the dataset from the CMS
- copies the dataset to tape or DASD in a format readable by the CMS
- deletes and reallocates the dataset
- reconnects the empty dataset to the CMS

To view archived data from the product, you must ensure that the dataset is stored on an accessible DASD volume and reconnect the dataset to the CMS.

Dataset naming conventions

When the maintenance facility backs up a dataset, it uses the following format to name the dataset:

       &rhilev.&midlev.&dsnlolev.Bnnnnnnn

where:

- &rhilev is the high-level qualifier of all datasets in the CT/PDS, as you specified during configuration
- &midlev is the mid-level qualifier of all datasets in the CT/PDS, as you specified during configuration
- &dsnlolev is the low-level qualifier of the dataset names as set by the configuration tool
- B is a required character
- nnnnnnn is a sequential number

Prerequisites

Before you begin to restore the connection between the archived dataset and the CMS, you need the following information:

- the name of the archived dataset that contains the data you want to view (your systems programmer can help you locate the name of the dataset)
- the name of the CT/PDS group that corresponds to the data you want to view

Finding background information

You can use the Installation and Configuration Assistance Tool to find the name of the CT/PDS group to which the archived dataset belongs by following this procedure:

1. Stop the CMS if it is running.
2. Log onto a TSO session and invoke ISPF.
3. At the ISPF Primary Option menu, enter 6 in the Option field to access the TSO command mode.
4. At the TSO command prompt, type:

       EX 'shilev.INSTLIB'

   where shilev is the high-level qualifier of the configuration tool installation library at your site. The configuration tool first displays the copyright panel and then the Main Menu.
5. From the Main Menu, select Configure products and then Select product to configure.
6. From the Product Selection Menu, select the product.
7. On the Runtime Environments (RTEs) panel, specify C to select the RTE where the product you configured resides.
8. On the Configure product panel, select Configure persistent data store and then Modify and review data store specifications.
9. Locate the low-level qualifier of the dataset you want to reconnect and note the corresponding group name.
10. Press F3 until you exit the configuration tool.

Connecting the dataset to the CMS

To reconnect the archived dataset to the CMS so you can view the data from the product, follow this procedure:

1. If the dataset resides on tape, use a utility such as IEBGENER to copy the dataset to a DASD volume that is accessible by the CMS.
2. Copy job KPDCOMMJ from &hilev.TKANSAM to &rhilev.&midlev.RKANSAM.
3. Access job &rhilev.&midlev.RKANSAM(KPDCOMMJ) with any editor.

4. Substitute site-specific values for the variables in the job, as described in the comments at the beginning of the job. In addition to those comments, you may find the following information helpful:
   - Variable &GROUP on the COMM ADDFILE statement is the group name that you identified in "Finding background information".
   - Variable &PDSN on the COMM ADDFILE statement is the name of the dataset you want to reconnect.
5. Locate the COMM ADDFILE statement near the bottom of the job and remove the comment character (*).
6. Submit KPDCOMMJ to restore the connection between the dataset you specified and the CMS.
7. To verify that the job ran successfully, view the report in RKPDLOG that lists all the persistent data store datasets connected to the CMS. (RKPDLOG is the ddname of a SYSOUT file allocated to the CMS.) Locate the last ADDFILE statement in the log and examine the list of datasets that follows the statement. If the job ran successfully, the name of the dataset you reconnected appears in the list.

Disconnecting the dataset

The dataset that you connected to the CMS is not permanently connected; the connection is automatically removed the next time the CMS terminates. If you want to remove the dataset from the CT/PDS immediately after you view the data, follow this procedure:
1. Access job &rhilev.&midlev.RKANSAM(KPDCOMMJ) with any editor.
2. Retain all site-specific values that you entered when you modified the job to reconnect the dataset in the previous procedure.
3. Locate the COMM ADDFILE statement near the bottom of the job and perform the following steps, as needed:
   A. Remove the comment character from the statement, if one exists.
   B. Overtype the word ADDFILE with the word DELFILE.
   C. Remove the GROUP parameter together with its value.
   D. Remove the RO parameter, if it exists.
4. Submit KPDCOMMJ to remove the connection between the dataset and the CMS. To verify that the job ran successfully, view the report in RKPDLOG that lists all datasets connected to the CMS. Locate the last DELFILE statement in the log and examine the list of datasets that follows the statement. If the job ran successfully, the name of the dataset you disconnected no longer appears in the list.
5. If the dataset also resides on tape, you may want to conserve space by deleting the DASD copy of the dataset.
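As a minimal sketch of the statements involved (the group name GENHIST and the dataset name are hypothetical; the comments in your copy of KPDCOMMJ describe the exact values for your site), the uncommented COMM ADDFILE statement might look like this:

   COMM ADDFILE GROUP=GENHIST FILE=DSN:CANDLE.OMXE.GENHIST.B0000001 RO

and, after the edits described under "Disconnecting the dataset", the corresponding statement would be:

   COMM DELFILE FILE=DSN:CANDLE.OMXE.GENHIST.B0000001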

Exporting and Restoring Persistent Data

Overview

In addition to the standard maintenance jobs used by the persistent data store, sample jobs are distributed with the CMS that you can use to export data to a sequential file and then restore the data to its original indexed format. These jobs are not tailored by the configuration tool at installation time and must be modified to add the pertinent information.

Exporting persistent data

Follow this procedure to export persistent data to a sequential file:
1. Stop the CMS if it is running.
2. Copy &thilev.&midlev.RKANSAM(KPDEXPTJ).
3. Update the jobcard with the following values:
      &rhilev   high-level qualifier of the runtime environment where the CT/PDS resides
      &pdsn     fully qualified name of the CT/PDS dataset to be exported
      &expdsn   fully qualified name of the export file you are creating
      &unit2    DASD unit identifier for &expdsn
      &ssz      record length of the output file (you can use the same record length as defined for &pdsn)
      &sct      count of blocks to allocate (you can use the same count as the blocks allocated for &pdsn)
      &bsz      the &ssz value plus eight
   With the exception of &pdsn, these values can be found in the PDSLOG SYSOUT of the CMS started task.
4. Submit the job.
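As a hypothetical illustration of step 3 (all values are examples, not defaults; take the real values from PDSLOG), the substitutions might be:

   &rhilev  = CANDLE.OMXE
   &pdsn    = CANDLE.OMXE.GENHIST1
   &expdsn  = CANDLE.OMXE.EXPORT.GENHIST
   &unit2   = SYSDA
   &ssz     = 4096
   &sct     = 1500
   &bsz     = 4104

Note that &bsz (4104) is simply the &ssz value (4096) plus eight.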

Restoring exported data

Follow this procedure to restore a previously exported CT/PDS dataset:
1. Copy &thilev.&midlev.RKANSAM(KPDRESTJ).
2. Update the jobcard with the following values:
      &rhilev   high-level qualifier of the runtime environment where the CT/PDS resides
      &pdsn     fully qualified name of the CT/PDS dataset to be restored
      &expdsn   fully qualified name of the exported file to restore from
      &unit2    DASD unit identifier for &expdsn
      &group    identifier for the group that the dataset will belong to
      &siz      size of the dataset to be allocated, in megabytes
   With the exception of &pdsn, these values can be found in the PDSLOG SYSOUT of the CMS started task.
3. Submit the job.
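Again purely as a hypothetical sketch, reversing the export shown above:

   &rhilev  = CANDLE.OMXE
   &pdsn    = CANDLE.OMXE.GENHIST1
   &expdsn  = CANDLE.OMXE.EXPORT.GENHIST
   &unit2   = SYSDA
   &group   = GENHIST
   &siz     = 10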

Data Record Format of Exported Data

Overview

This section describes the format of the dictionary entries, but not their contents; the actual meaning of the tables and columns is product-specific.

Due to the nature of the data being recorded, the format of a dataset is complex. A single dataset contains descriptions for every table that was recorded in the original dataset, so mapping information in the form of a data dictionary is provided for every table. In many cases, the tables can have variable length columns as well as rows of data in which some of the columns are not available. The information about missing columns and the lengths of variable columns is embedded in the data records. Some tables have columns that physically overlay each other; this must be taken into account when trying to obtain data for these overlays. Data in the exported file is kept in internal format, which means that many of the fields are in binary.

The output file is made up of three sections, each containing one or more data rows. Section 1 describes general information about the data source used to create the exported data. Section 2 contains a dictionary needed to map out the data. Section 3 contains the actual data rows. The historical data is maintained in relational tables, so the dictionary mappings provide table and column information for every table that had data recorded for it in the CT/PDS.

Section 1

The Section 1 record is not needed to map out the data within the exported file. However, it is useful for determining how to reallocate a dataset when a CT/PDS file needs to be reconstructed. Section 1 contains a single data row that describes the source of the data recorded in the export file. The data layout for the record is:

Table 8. Section 1 Data Record Format

   Field            Offset  Length  Type    Description
   RecID            0       4       Char    Record ID. Contains AA10 for header record 1.
   Length           4       4       Binary  Record length of the header record.
   Timestamp        8       16      Char    Timestamp of export. Format: CYYMMDDHHMMSSMMM.
   Group            24      8       Char    Group name to which the data belongs.
   Data Store Ver   32      8       Char    Version of KPDMANE used to record the original data.
   Export Version   40      8       Char    Version of KPDARCH used to create the exported file.
   Total Slots      48      4       Binary  Number of blocks allocated in the original dataset.
   Used Slots       52      4       Binary  Number of used blocks at the time of export.
   Slot Size        56      4       Binary  Block size of the original dataset.
   Expansion Area                           Unused area.
   Data Store Path                  Char    Name of the originating dataset.
   Export Path                      Char    Name of the exported dataset.

Section 2 Records

Section 2 provides information about the tables and columns that are represented in Section 3. This section has a header record followed by a number of table and column description records.

Dictionary Header Record

This is the first Section 2 record (and therefore the second record in the dataset). It provides general information about the format of the dictionary records that follow and describes how many tables are defined in the dictionary section. The data layout for the dictionary header record is:

Table 9. Section 2 Data Record Format

   Field           Offset  Length  Type    Description
   RecID           0       4       Char    Record ID. Contains DD10 for header record 2.
   Dictionary Len  4       4       Binary  Length of the entire dictionary.
   Header Len      8       4       Binary  Length of the header record.
   Table Count     12      4       Binary  Number of tables in the dictionary (one record per table).
   Column Count    16      4       Binary  Total number of columns described.
   Table Row Len   20      4       Binary  Size of a table row.
   Col Row Len     24      4       Binary  Size of a column row.
   Expansion                               Unused area.

Table description record

Each table within the exported dataset has a table record that provides its name, identifier, and additional information about the columns. All table records are provided before the first column record. The column records and all of the data records in Section 3 use the identifier number to associate them with the appropriate table.

The map length and variable column count fields can be used to determine exactly where the data for each column starts and to determine whether the column exists in a record. The format of the table description record is described in the table that follows.

Table 10. Section 2 Table Description Record

   Field           Offset  Length  Type    Description
   RecID           0       4       Char    Record ID. Contains DD20 for a table record.
   Identifier Num  4       4       Binary  Unique number for this table.
   Application     8       8       Char    Application name the table belongs to.
   Table Name                      Char    Table name.
   Table Version   26      8       Char    Table version.
   Map Length      34      2       Binary  Length of the mapping area.
   Column Count    16      4       Binary  Count of columns in the table.
   Variable Cols   36      4       Binary  Count of variable length columns.
   Row Count       40      4       Binary  Number of rows in the exported file for this table.
   Oldest Row                      Char    Timestamp of the oldest row written for this table.
   Newest Row                      Char    Timestamp of the newest row written for this table.
   Expansion                               Unused area.

Column description record

One record exists for every column in the associated table record. Each record provides the column name, type, and other characteristics. The order of the column rows is the same order in which the columns appear in the output row. However, some columns may be missing on any given row; the mapping structure defined under Section 3 must be used to determine whether a column is present. The format of the column records is:

Table 11. Section 2 Column Description Record

   Field            Offset  Length  Type    Description
   RecID            0       4       Char    Record ID. Contains DD30 for a column record.
   Table Ident      4       4       Binary  Identifier of the table this column belongs to.
   Column Name      8       10      Char    Column name.
   SQL Type         18      2       Char    SQL type for the column.
   Column Length    20      4       Binary  Maximum length of this column's data.
   Flag             24      1       Binary  Flag byte.
   Spare                                    Unused.
   Overlay Col ID   26      2       Char    Column number if this is an overlay.
   Overlay Col Off  28      2       Char    Offset into the row for the start of the overlay column.
   Alignment                                Unused.
   Spare                                    Unused.

Section 3 records

Section 3 has one record for every row of every table that was in the original CT/PDS dataset being exported. Each row starts with a fixed portion followed by the actual data associated with the row.

The length of the column map can be obtained from the table record (DD20). Each bit in the map represents one column: a 0 in a bit position indicates that the column's data is not present, while a 1 indicates that data exists in this row for the column. Immediately following the column map field is an unaligned set of 2-byte length fields, one for every variable length column in the table. This mapping information must be used to determine the starting location of any given column within the data structure. The actual data starts immediately after the last length field.

If you are dealing with overlay columns, use the column offset defined in the DD30 records to determine the starting location for this type of column. Typically, you do not need to worry about overlaid columns with extracted data. If you have a real need to look at the actual content of an overlaid column, you must expand the data by re-inserting any missing columns and expanding all variable length columns to their maximum length before doing the mapping.

The table that follows maps the fixed portion of the data.

Table 12. Section 3 Record Format

   Field        Offset  Length  Type    Description
   RecID        0       4       Char    Record ID. Contains ROW1 for a data record.
   Table Ident  4       4       Binary  Identifier of the table this record belongs to.
   Row Length   8       4       Binary  Total length of this row.
   Data Offset  12      4       Binary  Offset to the start of data.
   Data Length  16      4       Binary  Length of the data portion of the row.
   Column Map   20      Varies  Binary  Column-available map plus variable length fields.
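As a hypothetical worked example of the mapping just described: suppose a table record (DD20) reports 10 columns, 3 of them variable length, with a map length of 2. A row of that table then carries a 2-byte column map at offset 20 (10 bits used, 6 unused), followed by three unaligned 2-byte length fields, one per variable length column. If, say, the bit for the sixth column is 0, that column contributes no bytes to the data portion of this row, and the data for the columns that are present is packed together starting immediately after the third length field.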

Extracting CT/PDS Data to Flat Files

Overview

This topic explains how to extract data from a CT/PDS dataset into a flat file in EBCDIC format. This information can be loaded into spreadsheets or databases. The data is converted to tab-delimited columns and written to a separate file for each table, so the data format for all rows in each output dataset is consistent.

The program also generates a separate format file for each table. This file contains a single row that provides the column names in the order in which the data is organized, and it is also delimited for ease of use. An option (NOFF) on the KPDXTRA program bypasses creating the separate file and places the column information as the first record of the data file.

This job is not tailored by the configuration tool at installation time and must be modified to add the pertinent information.

The output from this job is written to files with the following naming standard:

   &pref.xymmdd.tablename

where:
   &pref is the high-level qualifier that you designate for the output files
   x is D for data output or F for format output
   ymmdd is the year (y), month (mm), and day (dd) on which the KPDXTRA job is run
   tablename is the identifier for the table being extracted (it is recommended that this name be no more than eight characters)

If this job is run more than once on a given day, data is appended to any data previously extracted for that day.

In Version 300 and later, all datasets are kept in read/write state even when they are not active. This makes the datasets unavailable to utility jobs while the CMS is running: jobs cannot be run against the active datasets, and the inactive datasets must first be taken offline. You can dynamically remove a dataset from the CMS by issuing the modify command:

   F stcname,KPDCMD QUIESCE FILE=DSN:dataset

If you must run a utility program against an active data store, issue a SWITCH command prior to issuing this QUIESCE command.
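For example, assuming a hypothetical CMS started task named CANSDSST, a file group GENHIST, and an illustrative dataset name, the sequence for freeing the currently active dataset might look like this:

   F CANSDSST,KPDCMD SWITCH GROUP=GENHIST
   F CANSDSST,KPDCMD QUIESCE FILE=DSN:CANDLE.OMXE.GENHIST1

See "Command Interface" later in this appendix for the full descriptions of these commands.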

Extracting CT/PDS data to EBCDIC files

Use this job to extract CT/PDS data to EBCDIC files:
1. Copy &thilev.&midlev.RKANSAM(KPDXTRAJ).
2. Update the jobcard with the following values:
      &rhilev   high-level qualifier of the runtime environment where the CT/PDS resides
      &pdsn     fully qualified name of the CT/PDS dataset to be extracted
      &pref     high-level qualifier for the extracted data
3. Add the parameters you want to use for this job:
      PREF=     identifies the high-level qualifier for the output files. This field is required.
      DELIM=nn  identifies the separator character to be placed between columns. The default is 05.
      NOFF      if used, causes the format file not to be generated; the column names are placed into the data file as the first record.
      QUOTES    places quotation marks around character-type data.
4. Submit the job.

Extracted data format

Header Record

The following is a sample extract header file record:

   TMZDIFF(int,0,4) WRITETIME(char,1,16) ORIGINNODE(char,2,128) QMNAME(char,3,48)
   APPLID(char,4,12) APPLTYPE(int,5,4) SDATE_TIME(char,6,16) HOST_NAME(char,7,48)
   CNTTRANPGM(int,8,4) MSGSPUT(int,9,4) MSGSREAD(int,10,4) MSGSBROWSD(int,11,4)
   INSIZEAVG(int,12,4) OUTSIZEAVG(int,13,4) AVGMQTIME(int,14,4) AVGAPPTIME(int,15,4)
   COUNTOFQS(int,16,4) AVGMQGTIME(int,17,4) AVGMQPTIME(int,18,4) DEFSTATE(int,19,4)
   INT_TIME(int,20,4) INT_TIMEC(char,21,8) CNTTASKID(int,22,4) SAMPLES(int,23,4)
   INTERVAL(int,24,4)

Each field is separated by a tab character (by default). The data consists of the column name, with the type, column number, and column length within the parentheses for each column. The information within the parentheses primarily describes internal formatting and can therefore be ignored.

Data Record

Each record in the data file for the above header contains data that looks like the following:

   0 " " "MQM7:SYSG:MQESA" "MQM7" "XCXS2DPL" 2 " " "SYSG" "016: 01"
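To make the parameter step concrete, here is a hypothetical parameter set and the files it would produce, assuming the job runs on the 15th day of the 10th month of a year ending in 5, against a table named QMEVENTS (all names are illustrative):

   PREF=CANDLE.XTRACT
   DELIM=05
   QUOTES

This would write the data records to CANDLE.XTRACT.D51015.QMEVENTS and the matching format record to CANDLE.XTRACT.F51015.QMEVENTS.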

Using the header file, the fields in the data file match up as follows:

   TMZDIFF      0                  Integer
   WRITETIME    " "                Character
   ORIGINNODE   "MQM7:SYSG:MQESA"  Character
   QMNAME       "MQM7"             Character
   SAMPLES      1                  Integer
   INTERVAL     900                Integer

Command Interface

Overview

The CT/PDS uses a command interface to perform many of the tasks needed to maintain the datasets used for historical data. Most of these commands can be invoked externally through a command interface supported in the Engine environment. These commands can be executed using the standard MVS MODIFY interface, with the following format:

   F stcname,KPDCMD command arguments

where:
   stcname    Started task name of the address space where the CT/PDS is running.
   command    One of the supported dynamic commands.
   arguments  Valid arguments to the specified command.

Commands

Many commands are supported by the CT/PDS. The commands described below are used primarily for maintenance.

SWITCH command

This dynamic command causes a data store file switch for a specific file group. At any given time, update-type operations against tables in a particular group are directed to one and only one of the files in the group; that file is called the "active" file. A file switch changes the active file for a group: it causes a file other than the currently active one to become the new active file. If the group specified by this command has only one file, or the group currently has no inactive file that is eligible for output, the switch is not performed. At the conclusion of a switch, the CT/PDS starts the maintenance process for a file in the group if no empty files remain in the group. The EXTRACT or NOEXTRACT keyword may be used to force or suppress an extract job for the data store file deactivated by the switch.

Syntax:

   SWITCH GROUP=groupid [ EXTRACT | NOEXTRACT ]

where:
   groupid  Specifies the ID of the file group that is to be switched. The group must have multiple files assigned to it.
   EXTRACT  Specifies that the deactivated data store file should be extracted, even if the file's GROUP statement did not request extraction.

   NOEXTRACT  Specifies that extraction should not be performed for the deactivated data store file. This option overrides the EXTRACT keyword of the GROUP statement.

If neither EXTRACT nor NOEXTRACT is specified, the presence or absence of the EXTRACT keyword on the file's GROUP statement determines whether extraction is performed as part of the switch.

BACKUP command

This command causes a maintenance task to be started for the data store file named on the command. The maintenance task typically deletes, allocates, and initializes a data store file, optionally backing up or exporting the file before deleting it. (The optional export and backup steps are requested via parameters on the data store file's GROUP command in the RKPDIN file.)

Syntax:

   BACKUP FILE=DSN:dsname

where:
   dsname  Specifies the physical dataset name of the file that is to be maintained.

ADDFILE command

This command is used to dynamically assign a new physical data store file to an existing file group. The command can be issued any time after CT/PDS initialization has completed in the CMS. It can be used to increase the number of files assigned to a group or to bring old data back online. It cannot, however, be used to define a new file group ID; it may be used to add files only to groups that already exist as the result of GROUP commands in the RKPDIN input file.

Syntax:

   ADDFILE GROUP=groupid FILE=DSN:dsname [ RO ] [ BACKUP ] [ ARCHIVE ]

where:
   groupid  Specifies the unique group ID of the file group to which a file is to be added.
   dsname   Specifies the fully-qualified name (no quotes) of the physical dataset that is to be added to the group specified by groupid.
   RO       Specifies that the file is to be read-only (that is, no new data may be recorded to it). By default, files are not read-only (that is, they are modifiable). This parameter may also be specified as READONLY.

   BACKUP   Specifies that the file is to be copied to disk or tape before being reallocated by the automatic maintenance task. (Whether the copy goes to disk or tape is a maintenance process customization option.) By default, files are not backed up during maintenance.
   ARCHIVE  Specifies that the file is to be exported before being reallocated by the automatic maintenance task. By default, files are not exported during maintenance.

DELFILE command

This command is used to drop one physical data store file from a file group's queue of files. It can be issued any time after CT/PDS initialization has completed in the CMS. The file to be dropped must be full, partially full, or empty; it cannot be the "active" (output) file for its group (if it is, the DELFILE command is rejected as invalid). The DELFILE command is conceptually the opposite of the ADDFILE command and is intended to be used to manually drop a file that was originally introduced by a GROUP or ADDFILE command. Once a file has been dropped by DELFILE, it is no longer allocated to the CMS task and may be allocated by other tasks. Note that DELFILE does not physically delete a file or alter it in any way. To physically delete and uncatalog a file, use the REMOVE command.

Syntax:

   DELFILE FILE=DSN:dsname

where:
   dsname  Specifies the fully-qualified name (without quotes) of the file that is to be dropped.

EXTRACT command

This command causes an extract job to be started for the data store file named on the command. The job converts the table data in the data store file to delimited text format in new files, then signals the originating CMS to resume use of the data store file. For each table extracted from the data store file, two new files are created: one contains the converted data and one contains a record describing the format of each row in the first file.

Syntax:

   EXTRACT FILE=DSN:dsname

where:
   dsname  Specifies the physical dataset name of the file to have its data extracted.

INITDS command

This command forces a data store file to be initialized within the address space where the CT/PDS is running.

Syntax:

   INITDS FILE=DSN:dsname

where:
   dsname  Identifies the dataset name of the data store file to be initialized.

RECOVER command

This command causes a recovery task to be started for the data store file named on the command. The recovery task attempts to repair a corrupted data store file by exporting it, reallocating and initializing it, and restoring it. The restore operation rebuilds the index information, the data most likely to be corrupted in a damaged file. The recovery is not guaranteed to be successful, however; some severe forms of data corruption are unrecoverable.

Syntax:

   RECOVER FILE=DSN:dsname

where:
   dsname  Specifies the physical name of the dataset to be recovered.

RESUME command

The RESUME command is used to notify the CT/PDS that it can once again make use of the dataset specified in the arguments. The file identified must be one that was taken offline by the BACKUP, RECOVER, or EXTRACT commands.

Syntax:

   RESUME FILE=DSN:dsname

where:
   dsname  Specifies the physical name of the dataset to be brought online.

Other Useful Commands

QUERY CONNECT command

The QUERY CONNECT command displays a list of applications and tables that are currently defined in the CT/PDS. The output of this command shows the application names, table names, total number of rows recorded for each table, the group each table belongs to, and the current dataset that the data is being written to.

Syntax:

   QUERY CONNECT <ACTIVE>

where:
   ACTIVE  Optional parameter that limits the display to tables that are active. An active table is one that has been defined and assigned to an existing group, and the group has datasets assigned to it.

QUERY DATASTORE command

The QUERY DATASTORE command displays a list of datasets known to the CT/PDS. For each dataset, the total number of allocated blocks, the number of used blocks, the number of tables that have data recorded, the block size, and the status are displayed.

Syntax:

   QUERY DATASTORE <FILE=DSN:datasetname>

where:
   FILE  Optional parameter that allows you to request the details for a single dataset. When this option is used, the resulting display changes to show information specific to the tables being recorded in that dataset.

COMMIT command

This dynamic command flushes all pending buffered data to disk. For performance reasons, the CT/PDS does not immediately write to disk every update to a persistent table. Updates are buffered in virtual storage, and the buffered updates are eventually "flushed" (that is, written to disk) at an optimal time. However, this architecture makes it possible for persistent data store files to become "corrupted" (invalid) if the files are closed prematurely, before pending buffered updates have been flushed. Such premature closings may leave inconsistent information in the files. The known circumstances that may cause corruption are:
- severe abnormal CMS terminations that prevent the CT/PDS recovery routines from executing
- IPLs performed without first stopping the CMS

The COMMIT command is intended to limit the exposure to data store file corruption. Some applications automatically issue this command after inserting data.

Syntax:

   COMMIT
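Putting the commands together, a hypothetical maintenance session might proceed as follows. The started task name CANSDSST, the group GENHIST, and all dataset names are illustrative only:

   F CANSDSST,KPDCMD QUERY CONNECT ACTIVE
   F CANSDSST,KPDCMD SWITCH GROUP=GENHIST NOEXTRACT
   F CANSDSST,KPDCMD BACKUP FILE=DSN:CANDLE.OMXE.GENHIST1
   F CANSDSST,KPDCMD ADDFILE GROUP=GENHIST FILE=DSN:CANDLE.OMXE.GENHIST.B0000001 RO
   F CANSDSST,KPDCMD QUERY DATASTORE FILE=DSN:CANDLE.OMXE.GENHIST.B0000001
   F CANSDSST,KPDCMD DELFILE FILE=DSN:CANDLE.OMXE.GENHIST.B0000001

This sequence lists the active tables, switches the active file for the group without extracting it, starts maintenance for the deactivated file, brings an archived copy back online read-only, verifies it, and finally drops it again.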

Appendix B. Support Information

If you have a problem with your IBM software, you want to resolve it quickly. This section describes the following options for obtaining support for IBM software products:
- Searching knowledge bases
- Obtaining fixes
- Receiving weekly support updates
- Contacting IBM Software Support

Searching knowledge bases

You can search the available knowledge bases to determine whether your problem has already been encountered and documented.

Searching the information center

IBM provides extensive documentation that can be installed on your local computer or on an intranet server. You can use the search function of this information center to query conceptual information, instructions for completing tasks, and reference information.

Searching the Internet

If you cannot find an answer to your question in the information center, search the Internet for the latest, most complete information that might help you resolve your problem. To search multiple Internet resources for your product, use the Web search topic in your information center. In the navigation frame, click Troubleshooting and support > Searching knowledge bases and select Web search. From this topic, you can search a variety of resources, including:
- IBM technotes
- IBM downloads
- IBM Redbooks
- IBM developerWorks
- Forums and newsgroups
- Google

Obtaining fixes

A product fix might be available to resolve your problem. To determine what fixes are available for your IBM software product, follow these steps:
1. Go to the IBM Software Support Web site.
2. Click Downloads and drivers in the Support topics section.
3. Select the Software category.
4. Select a product in the Sub-category list.
5. In the Find downloads and drivers by product section, select one software category from the Category list.
6. Select one product from the Sub-category list.
7. Type more search terms in the Search within results field if you want to refine your search.
8. Click Search.
9. From the list of downloads returned by your search, click the name of a fix to read its description and, optionally, download the fix.

For more information about the types of fixes that are available, see the IBM Software Support Handbook.

Receiving weekly support updates

To receive weekly notifications about fixes and other software support news, follow these steps:
1. Go to the IBM Software Support Web site.
2. Click My support in the upper right corner of the page.
3. If you have already registered for My support, sign in and skip to the next step. If you have not registered, click register now, complete the registration form using your e-mail address as your IBM ID, and click Submit.
4. Click Edit profile.
5. In the Products list, select Software. A second list is displayed.
6. In the second list, select a product segment, for example, Application servers. A third list is displayed.
7. In the third list, select a product sub-segment, for example, Distributed Application & Web Servers. A list of applicable products is displayed.
8. Select the products for which you want to receive updates, for example, IBM HTTP Server and WebSphere Application Server.
9. Click Add products.
10. After selecting all products that are of interest to you, click Subscribe to e-mail on the Edit profile tab.
11. Select Please send these documents by weekly e-mail.

12. Update your e-mail address as needed.
13. In the Documents list, select Software.
14. Select the types of documents that you want to receive information about.
15. Click Update.

If you experience problems with the My support feature, you can obtain help in one of the following ways:
- Online: send an e-mail message describing your problem.
- By phone: call IBM-4You.

Contacting IBM Software Support

IBM Software Support provides assistance with product defects. Before contacting IBM Software Support, your company must have an active IBM software maintenance contract, and you must be authorized to submit problems to IBM. The type of software maintenance contract that you need depends on the type of product you have:
- For IBM distributed software products (including, but not limited to, Tivoli, Lotus, and Rational products, as well as DB2 and WebSphere products that run on Windows or UNIX operating systems), enroll in Passport Advantage in one of the following ways. Online: go to the Passport Advantage Web page and click How to Enroll. By phone: for the phone number to call in your country, go to the IBM Software Support Web site and click the name of your geographic region.
- For customers with Subscription and Support (S & S) contracts, go to the Software Service Request Web site.
- For customers with IBMLink, CATIA, Linux, S/390, iSeries, pSeries, zSeries, and other support agreements, go to the Support Line Web site.
- For IBM eServer software products (including, but not limited to, DB2 and WebSphere products that run in zSeries, pSeries, and iSeries environments), you can purchase a software maintenance agreement by working directly with an IBM sales representative or an IBM Business Partner. For more information about support for eServer software products, go to the IBM Technical Support Advantage Web site.

If you are not sure what type of software maintenance contract you need, call IBMSERV in the United States. From other countries, go to the contacts page of the IBM Software Support Handbook on the Web and click the name of your geographic region for phone numbers of people who provide support for your location.

To contact IBM Software Support, follow these steps:
1. Determining the business impact
2. Describing problems and gathering information
3. Submitting problems

Determining the business impact

When you report a problem to IBM, you are asked to supply a severity level. Therefore, you need to understand and assess the business impact of the problem that you are reporting. Use the following criteria:

   Severity 1  The problem has a critical business impact. You are unable to use the program, resulting in a critical impact on operations. This condition requires an immediate solution.
   Severity 2  The problem has a significant business impact. The program is usable, but it is severely limited.
   Severity 3  The problem has some business impact. The program is usable, but less significant features (not critical to operations) are unavailable.
   Severity 4  The problem has minimal business impact. The problem causes little impact on operations, or a reasonable circumvention to the problem was implemented.

Describing problems and gathering information

When explaining a problem to IBM, be as specific as possible. Include all relevant background information so that IBM Software Support specialists can help you solve the problem efficiently. To save time, know the answers to these questions:
- What software versions were you running when the problem occurred?
- Do you have logs, traces, and messages that are related to the problem symptoms? IBM Software Support is likely to ask for this information.
- Can you re-create the problem? If so, what steps were performed to re-create it?
- Did you make any changes to the system? For example, did you make changes to the hardware, operating system, or networking software?
- Are you currently using a workaround for the problem? If so, be prepared to explain the workaround when you report the problem.

Submitting problems

You can submit your problem to IBM Software Support in one of two ways:
- Online: click Submit and track problems on the IBM Software Support site and type your information into the appropriate problem submission form.
- By phone: for the phone number to call in your country, go to the contacts page of the IBM Software Support Handbook and click the name of your geographic region.

If the problem you submit is for a software defect or for missing or inaccurate documentation, IBM Software Support creates an Authorized Program Analysis Report (APAR). The APAR describes the problem in detail. Whenever possible, IBM Software Support provides a workaround that you can implement until the APAR is resolved and a fix is delivered. IBM publishes resolved APARs on the Software Support Web site daily, so that other users who experience the same problem can benefit from the same resolution.


Appendix C. Notices

Overview

This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

   IBM Director of Licensing
   IBM Corporation
   North Castle Drive
   Armonk, NY
   U.S.A.

For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to:

   IBM World Trade Asia Corporation
   Licensing
   2-31 Roppongi 3-chome, Minato-ku
   Tokyo 106, Japan

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement might not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication.

IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product, and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact:

   IBM Corporation
   2Z4A/
   Burnet Road
   Austin, TX
   U.S.A.

Such information may be available, subject to appropriate terms and conditions, including in some cases payment of a fee.

The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement, or any equivalent agreement between us.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems, and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements, or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only.

All IBM prices shown are IBM's suggested retail prices, are current, and are subject to change without notice. Dealer prices may vary.

This information is for planning purposes only. The information herein is subject to change before the products described become available.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious, and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing, or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.

Each copy or any portion of these sample programs or any derivative work must include a copyright notice as follows: (your company name) (year). Portions of this code are derived from IBM Corp. Sample Programs. Copyright IBM Corp. _enter the year or years_. All rights reserved.

If you are viewing this information in softcopy form, the photographs and color illustrations might not display.

Trademarks

IBM, the IBM logo, AS/400, Candle, Candle Management Server, Candle Management Workstation, CandleNet, CandleNet Portal, DB2, developerWorks, eServer, IBMLink, iSeries, Lotus, Lotus Notes, MVS, OMEGAMON, OMEGAMON Monitoring Agent, OS/400, Passport Advantage, pSeries, Rational, Redbooks, S/390, Tivoli, the Tivoli logo, VTAM, z/OS, and zSeries are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both.

Intel, Intel Inside (logos), MMX, Celeron, Intel Centrino, Intel Xeon, Itanium, Pentium, and Pentium III Xeon are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.


More information

IBM Tivoli Decision Support for z/os Version Distributed Systems Performance Feature Guide and Reference IBM SH

IBM Tivoli Decision Support for z/os Version Distributed Systems Performance Feature Guide and Reference IBM SH IBM Tivoli Decision Support for z/os Version 1.8.2 Distributed Systems Performance Feature Guide and Reference IBM SH19-4018-13 IBM Tivoli Decision Support for z/os Version 1.8.2 Distributed Systems Performance

More information

IBM Tivoli Decision Support for z/os Version CICS Performance Feature Guide and Reference IBM SH

IBM Tivoli Decision Support for z/os Version CICS Performance Feature Guide and Reference IBM SH IBM Tivoli Decision Support for z/os Version 1.8.2 CICS Performance Feature Guide and Reference IBM SH19-6820-12 IBM Tivoli Decision Support for z/os Version 1.8.2 CICS Performance Feature Guide and Reference

More information

Topaz for Java Performance Installation Guide. Release 16.03

Topaz for Java Performance Installation Guide. Release 16.03 Topaz for Java Performance Installation Guide Release 16.03 ii Topaz for Java Performance Installation Guide Please direct questions about Topaz for Java Performance or comments on this document to: Topaz

More information

Note: Before using this information and the product it supports, read the information in Notices on page 88.

Note: Before using this information and the product it supports, read the information in Notices on page 88. o IBM Tivoli Storage Resource Manager, Version 1.3 Warehouse Enablement Pack, Version 1.2.. Implementation Guide for Tivoli Data Warehouse, Version 1.2 SC32-977-1 Note: Before using this information and

More information

IBM SmartCloud Application Performance Management Entry Edition - VM Image Version 7 Release 7. Installation and Deployment Guide IBM SC

IBM SmartCloud Application Performance Management Entry Edition - VM Image Version 7 Release 7. Installation and Deployment Guide IBM SC IBM SmartCloud Application Performance Management Entry Edition - VM Image Version 7 Release 7 Installation and Deployment Guide IBM SC27-5334-01 IBM SmartCloud Application Performance Management Entry

More information

Problem Determination Guide (Revised March 30, 2007)

Problem Determination Guide (Revised March 30, 2007) IBM Tivoli Configuration Manager for Automated Teller Machines Problem Determination Guide (Revised March 30, 2007) Version 2.1 SC32-1411-01 IBM Tivoli Configuration Manager for Automated Teller Machines

More information

About Your Software IBM

About Your Software IBM About Your Software About Your Software Note Before using this information and the product it supports, be sure to read Appendix. Viewing the license agreement on page 19 and Notices on page 21. First

More information

SAS/ASSIST Software Setup

SAS/ASSIST Software Setup 173 APPENDIX 3 SAS/ASSIST Software Setup Appendix Overview 173 Setting Up Graphics Devices 173 Setting Up Remote Connect Configurations 175 Adding a SAS/ASSIST Button to Your Toolbox 176 Setting Up HTML

More information

IBM. IBM Tivoli Candle Products Messages. Volume 4 (KONCV - OC) Tivoli. Version SC

IBM. IBM Tivoli Candle Products Messages. Volume 4 (KONCV - OC) Tivoli. Version SC Tivoli IBM Tivoli Candle Products Messages IBM Version 1.0.3 Volume 4 (KONCV - OC) SC32-9419-02 12 1 2 Tivoli IBM Tivoli Candle Products Messages IBM Version 1.0.3 Volume 4 (KONCV - OC) SC32-9419-02 Note

More information

Overview Guide. Mainframe Connect 15.0

Overview Guide. Mainframe Connect 15.0 Overview Guide Mainframe Connect 15.0 DOCUMENT ID: DC37572-01-1500-01 LAST REVISED: August 2007 Copyright 1991-2007 by Sybase, Inc. All rights reserved. This publication pertains to Sybase software and

More information

Tivoli Common Reporting V2.x. Reporting with Tivoli Data Warehouse

Tivoli Common Reporting V2.x. Reporting with Tivoli Data Warehouse Tivoli Common Reporting V2.x Reporting with Tivoli Data Warehouse Preethi C Mohan IBM India Ltd. India Software Labs, Bangalore +91 80 40255077 preethi.mohan@in.ibm.com Copyright IBM Corporation 2012 This

More information

About Your Software Windows NT Workstation 4.0 Windows 98 Windows 95 Applications and Support Software

About Your Software Windows NT Workstation 4.0 Windows 98 Windows 95 Applications and Support Software IBM Personal Computer About Your Software Windows NT Workstation 4.0 Windows 98 Windows 95 Applications and Support Software IBM Personal Computer About Your Software Windows NT Workstation 4.0 Windows

More information

User Management Guide

User Management Guide IBM Tivoli Monitoring for Databases: Oracle User Management Guide Version 5.1.0 GC23-4731-00 IBM Tivoli Monitoring for Databases: Oracle User Management Guide Version 5.1.0 GC23-4731-00 Note Before using

More information

Introduction and Planning Guide

Introduction and Planning Guide Content Manager OnDemand for Multiplatforms Introduction and Planning Guide Version 7.1 GC27-0839-00 Content Manager OnDemand for Multiplatforms Introduction and Planning Guide Version 7.1 GC27-0839-00

More information

1. ECI Hosted Clients Installing Release 6.3 for the First Time (ECI Hosted) Upgrading to Release 6.3SP2 (ECI Hosted)

1. ECI Hosted Clients Installing Release 6.3 for the First Time (ECI Hosted) Upgrading to Release 6.3SP2 (ECI Hosted) 1. ECI Hosted Clients........................................................................................... 2 1.1 Installing Release 6.3 for the First Time (ECI Hosted)...........................................................

More information

IBM Personal Computer. About Your Software Windows NT Workstation 4.0, Applications, and Support Software

IBM Personal Computer. About Your Software Windows NT Workstation 4.0, Applications, and Support Software IBM Personal Computer About Your Software Windows NT Workstation 4.0, Applications, and Support Software IBM Personal Computer About Your Software Windows NT Workstation 4.0, Applications, and Support

More information

InfoSphere Master Data Management Reference Data Management Hub Version 10 Release 0. User s Guide GI

InfoSphere Master Data Management Reference Data Management Hub Version 10 Release 0. User s Guide GI InfoSphere Master Data Management Reference Data Management Hub Version 10 Release 0 User s Guide GI13-2637-00 InfoSphere Master Data Management Reference Data Management Hub Version 10 Release 0 User

More information

DB2. Migration Guide. DB2 Version 9 GC

DB2. Migration Guide. DB2 Version 9 GC DB2 DB2 Version 9 for Linux, UNIX, and Windows Migration Guide GC10-4237-00 DB2 DB2 Version 9 for Linux, UNIX, and Windows Migration Guide GC10-4237-00 Before using this information and the product it

More information

Reporting and Graphing

Reporting and Graphing Tivoli Management Solution for Microsoft SQL Reporting and Graphing Version 1.1 Tivoli Management Solution for Microsoft SQL Reporting and Graphing Version 1.1 Tivoli Management Solution for Microsoft

More information

IBM Spectrum Protect HSM for Windows Version Administration Guide IBM

IBM Spectrum Protect HSM for Windows Version Administration Guide IBM IBM Spectrum Protect HSM for Windows Version 8.1.0 Administration Guide IBM IBM Spectrum Protect HSM for Windows Version 8.1.0 Administration Guide IBM Note: Before you use this information and the product

More information

TME 10 Reporter Release Notes

TME 10 Reporter Release Notes TME 10 Reporter Release Notes Version 2.0 April, 1997 TME 10 Reporter (April 1997) Copyright Notice Copyright 1991, 1997 by Tivoli Systems, an IBM Company, including this documentation and all software.

More information

IBM FileNet Business Process Framework Version 4.1. Explorer Handbook GC

IBM FileNet Business Process Framework Version 4.1. Explorer Handbook GC IBM FileNet Business Process Framework Version 4.1 Explorer Handbook GC31-5515-06 IBM FileNet Business Process Framework Version 4.1 Explorer Handbook GC31-5515-06 Note Before using this information and

More information

Netcool/OMNIbus Gateway for Siebel Communications Version 5.0. Reference Guide. November 30, 2012 IBM SC

Netcool/OMNIbus Gateway for Siebel Communications Version 5.0. Reference Guide. November 30, 2012 IBM SC Netcool/OMNIbus Gateway for Siebel Communications Version 5.0 Reference Guide November 30, 2012 IBM SC23-6394-03 Netcool/OMNIbus Gateway for Siebel Communications Version 5.0 Reference Guide November

More information

ZL UA Configuring Exchange 2010 for Archiving Guide. Version 7.0

ZL UA Configuring Exchange 2010 for Archiving Guide. Version 7.0 ZL UA Configuring Exchange 2010 for Archiving Guide Version 7.0 ZL Technologies, Inc. Copyright 2011 ZL Technologies, Inc.All rights reserved ZL Technologies, Inc. ( ZLTI, formerly known as ZipLip ) and

More information

IBM Exam C IBM Tivoli Monitoring V6.3 Implementation Version: 6.0 [ Total Questions: 120 ]

IBM Exam C IBM Tivoli Monitoring V6.3 Implementation Version: 6.0 [ Total Questions: 120 ] s@lm@n IBM Exam C9560-507 IBM Tivoli Monitoring V6.3 Implementation Version: 6.0 [ Total Questions: 120 ] Question No : 1 A customer must perform trend analysis for future growth. Which product should

More information

HYPERION SYSTEM 9 BI+ ANALYTIC SERVICES RELEASE 9.2 ANALYTIC SQL INTERFACE GUIDE

HYPERION SYSTEM 9 BI+ ANALYTIC SERVICES RELEASE 9.2 ANALYTIC SQL INTERFACE GUIDE HYPERION SYSTEM 9 BI+ ANALYTIC SERVICES RELEASE 9.2 ANALYTIC SQL INTERFACE GUIDE Copyright 1998 2006 Hyperion Solutions Corporation. All rights reserved. Hyperion, the Hyperion H logo, and Hyperion s product

More information

Document Management System GUI. v6.0 User Guide

Document Management System GUI. v6.0 User Guide Document Management System GUI v6.0 User Guide Copyright Copyright HelpSystems, LLC. All rights reserved. www.helpsystems.com US: +1 952-933-0609 Outside the U.S.: +44 (0) 870 120 3148 IBM, AS/400, OS/400,

More information

Replication Server Heterogeneous Edition

Replication Server Heterogeneous Edition Overview Guide Replication Server Heterogeneous Edition 15.2 DOCUMENT ID: DC01055-01-1520-01 LAST REVISED: August 2009 Copyright 2009 by Sybase, Inc. All rights reserved. This publication pertains to Sybase

More information

Central Administration Console Installation and User's Guide

Central Administration Console Installation and User's Guide IBM Tivoli Storage Manager FastBack for Workstations Version 7.1 Central Administration Console Installation and User's Guide SC27-2808-03 IBM Tivoli Storage Manager FastBack for Workstations Version

More information

Extended Search Administration

Extended Search Administration IBM Lotus Extended Search Extended Search Administration Version 4 Release 0.1 SC27-1404-02 IBM Lotus Extended Search Extended Search Administration Version 4 Release 0.1 SC27-1404-02 Note! Before using

More information

DBLOAD Procedure Reference

DBLOAD Procedure Reference 131 CHAPTER 10 DBLOAD Procedure Reference Introduction 131 Naming Limits in the DBLOAD Procedure 131 Case Sensitivity in the DBLOAD Procedure 132 DBLOAD Procedure 132 133 PROC DBLOAD Statement Options

More information

Business Insight Authoring

Business Insight Authoring Business Insight Authoring Getting Started Guide ImageNow Version: 6.7.x Written by: Product Documentation, R&D Date: August 2016 2014 Perceptive Software. All rights reserved CaptureNow, ImageNow, Interact,

More information

DupScout DUPLICATE FILES FINDER

DupScout DUPLICATE FILES FINDER DupScout DUPLICATE FILES FINDER User Manual Version 10.3 Dec 2017 www.dupscout.com info@flexense.com 1 1 Product Overview...3 2 DupScout Product Versions...7 3 Using Desktop Product Versions...8 3.1 Product

More information

Tivoli Storage Manager

Tivoli Storage Manager Tivoli Storage Manager Version 6.1 Server Upgrade Guide SC23-9554-01 Tivoli Storage Manager Version 6.1 Server Upgrade Guide SC23-9554-01 Note Before using this information and the product it supports,

More information

IBM Tivoli Agentless Monitoring for Windows Operating Systems Version (Revised) User's Guide SC

IBM Tivoli Agentless Monitoring for Windows Operating Systems Version (Revised) User's Guide SC IBM Tivoli Agentless Monitoring for Windows Operating Systems Version 6.2.1 (Revised) User's Guide SC23-9765-01 IBM Tivoli Agentless Monitoring for Windows Operating Systems Version 6.2.1 (Revised) User's

More information

IBM Tivoli Monitoring: AIX Premium Agent Version User's Guide SA

IBM Tivoli Monitoring: AIX Premium Agent Version User's Guide SA Tioli IBM Tioli Monitoring: AIX Premium Agent Version 6.2.2.1 User's Guide SA23-2237-06 Tioli IBM Tioli Monitoring: AIX Premium Agent Version 6.2.2.1 User's Guide SA23-2237-06 Note Before using this information

More information

SAS Data Integration Studio 3.3. User s Guide

SAS Data Integration Studio 3.3. User s Guide SAS Data Integration Studio 3.3 User s Guide The correct bibliographic citation for this manual is as follows: SAS Institute Inc. 2006. SAS Data Integration Studio 3.3: User s Guide. Cary, NC: SAS Institute

More information

Tivoli Data Warehouse

Tivoli Data Warehouse Tivoli Data Warehouse Version 1.3 Tivoli Data Warehouse Troubleshooting Guide SC09-7776-01 Tivoli Data Warehouse Version 1.3 Tivoli Data Warehouse Troubleshooting Guide SC09-7776-01 Note Before using

More information

Tivoli Tivoli Decision Support for z/os

Tivoli Tivoli Decision Support for z/os Tivoli Tivoli Decision Support for z/os Version 1.8.1 Messages and Problem Determination SH19-6902-13 Tivoli Tivoli Decision Support for z/os Version 1.8.1 Messages and Problem Determination SH19-6902-13

More information

IBM Optim. Compare Introduction. Version7Release3

IBM Optim. Compare Introduction. Version7Release3 IBM Optim Compare Introduction Version7Release3 IBM Optim Compare Introduction Version7Release3 Note Before using this information and the product it supports, read the information in Notices on page

More information

Administrator for Enterprise Clients: User s Guide. Second Edition

Administrator for Enterprise Clients: User s Guide. Second Edition Administrator for Enterprise Clients: User s Guide Second Edition The correct bibliographic citation for this manual is as follows: SAS Institute Inc. 2002. Administrator for Enterprise Clients: User s

More information

IBM Optim. Edit User Manual. Version7Release3

IBM Optim. Edit User Manual. Version7Release3 IBM Optim Edit User Manual Version7Release3 IBM Optim Edit User Manual Version7Release3 Note Before using this information and the product it supports, read the information in Notices on page 79. Version

More information

APPENDIX 4 Migrating from QMF to SAS/ ASSIST Software. Each of these steps can be executed independently.

APPENDIX 4 Migrating from QMF to SAS/ ASSIST Software. Each of these steps can be executed independently. 255 APPENDIX 4 Migrating from QMF to SAS/ ASSIST Software Introduction 255 Generating a QMF Export Procedure 255 Exporting Queries from QMF 257 Importing QMF Queries into Query and Reporting 257 Alternate

More information

IBM Tivoli Storage FlashCopy Manager Version Installation and User's Guide for Windows IBM

IBM Tivoli Storage FlashCopy Manager Version Installation and User's Guide for Windows IBM IBM Tivoli Storage FlashCopy Manager Version 4.1.3 Installation and User's Guide for Windows IBM IBM Tivoli Storage FlashCopy Manager Version 4.1.3 Installation and User's Guide for Windows IBM Note:

More information

Installation and Setup Guide

Installation and Setup Guide IBM Tioli Monitoring for Business Integration Installation and Setup Guide Version 5.1.1 SC32-1402-00 IBM Tioli Monitoring for Business Integration Installation and Setup Guide Version 5.1.1 SC32-1402-00

More information

Understanding Advanced Workflow

Understanding Advanced Workflow IBM Content Manager for iseries Understanding Advanced Workflow Version 5 Release 1 SC27-1138-00 IBM Content Manager for iseries Understanding Advanced Workflow Version 5 Release 1 SC27-1138-00 Note Before

More information

SAS IT Resource Management Forecasting. Setup Specification Document. A SAS White Paper

SAS IT Resource Management Forecasting. Setup Specification Document. A SAS White Paper SAS IT Resource Management Forecasting Setup Specification Document A SAS White Paper Table of Contents Introduction to SAS IT Resource Management Forecasting... 1 Getting Started with the SAS Enterprise

More information

IBM. Documentation. IBM Sterling Connect:Direct Process Language. Version 5.3

IBM. Documentation. IBM Sterling Connect:Direct Process Language. Version 5.3 IBM Sterling Connect:Direct Process Language IBM Documentation Version 5.3 IBM Sterling Connect:Direct Process Language IBM Documentation Version 5.3 This edition applies to Version 5 Release 3 of IBM

More information

Guide to User Interface 4.3

Guide to User Interface 4.3 Datatel Colleague Guide to User Interface 4.3 Release 18 June 24, 2011 For corrections and clarifications to this manual, see AnswerNet page 1926.37. Guide to User Interface 4.3 All Rights Reserved The

More information

IBM Copy Services Manager Version 6 Release 1. Release Notes August 2016 IBM

IBM Copy Services Manager Version 6 Release 1. Release Notes August 2016 IBM IBM Copy Services Manager Version 6 Release 1 Release Notes August 2016 IBM Note: Before using this information and the product it supports, read the information in Notices on page 9. Edition notice This

More information

Exchange 2000 Agent Installation Guide

Exchange 2000 Agent Installation Guide IBM Tivoli Identity Manager Exchange 2000 Agent Installation Guide Version 4.5.0 SC32-1156-03 IBM Tivoli Identity Manager Exchange 2000 Agent Installation Guide Version 4.5.0 SC32-1156-03 Note: Before

More information