Mass Data Extraction User's Guide



Tivoli Netcool Performance Manager 1.3 Wireline Component (Netcool/Proviso 5.2) Document Revision R2E2. Mass Data Extraction User's Guide

Note: Before using this information and the product it supports, read the information in Notices on page 51. Copyright IBM Corporation 2010. US Government Users Restricted Rights: Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Preface
  Netcool/Proviso components
Chapter 1. Introduction 1
  Using the MDE 1
  Data aggregation 1
  Data customization 3
Chapter 2. MDE setup 5
Chapter 3. MDE API 7
  getresourcemetrictimeseries() 7
  getresourceaggregatedmetrictimeseries() 8
  getresourcesnapshot() 11
Chapter 4. Mass Data Extraction command-line interface 13
  Command-line options 13
  Batch mode 14
  Output Conventions and Formats 15
  Naming of output files 18
  Creating a query 19
  Sorting 20
  Adding Alias, Name and Type Data 20
  MDE Command Line Examples 21
    Snapshot 21
    Time Series Examples 23
    Batched Output 28
    Alias 32
  Configuring additional element data 34
Chapter 5. DataStage integration 39
  Setting up an MDE engine stage instance 39
  Engine Stage workflow 41
  Running the engine stage 41
Chapter 6. MDE Security 43
  Getting the security certificate from the TIP server 43
  Import security certificate to the JRE Keystore 44
Chapter 7. MDE firewall access 45
Appendix A. Query Schema XSD 47
Appendix B. Result Schema XSD 49
Notices 51
  Trademarks 53
  Additional copyright notices 54


Preface

IBM Tivoli Netcool Performance Manager is a bundled product consisting of two main components: a wireline component (formerly Tivoli Netcool/Proviso) and a wireless component (formerly Tivoli Netcool Performance Manager for Wireless). This guide assumes that you are a network administrator or operations specialist who has knowledge of database management. Mass Data Extraction (MDE) is a component of Tivoli Netcool/Proviso, which is in turn a component of the Tivoli Netcool Performance Manager solution.

Audience

The audience for this manual is an operations specialist who needs to make large volumes of data accessible to external applications, such as business reporting tools, for the purposes of data warehousing and data modelling. To use the MDE utility it is important that you have knowledge of:
- Netcool/Proviso.
- DataChannel metrics and attributes.

Organization

This guide is organized as follows:
- Chapter 1, Introduction, on page 1
- Chapter 2, MDE setup, on page 5
- Chapter 3, MDE API, on page 7
- Chapter 4, Mass Data Extraction command-line interface, on page 13
- Chapter 5, DataStage integration, on page 39
- Chapter 6, MDE Security, on page 43

Netcool/Proviso components

Netcool/Proviso is made up of the following components:
- Netcool/Proviso DataMart is a set of management, configuration and troubleshooting GUIs that the Netcool/Proviso System Administrator uses to define policies and configuration, as well as verify and troubleshoot operations.
- Netcool/Proviso DataLoad provides flexible, distributed data collection and data import of SNMP and non-SNMP data to a centralized database.
- Netcool/Proviso DataChannel aggregates the data collected through Netcool/Proviso DataLoad for use by the Netcool/Proviso DataView reporting functions. It also processes online calculations and detects real-time threshold violations.
- Netcool/Proviso DataView is a reliable application server for on-demand, web-based network reports.
- Netcool/Proviso Technology Packs extend the Netcool/Proviso system with service-ready reports for network operations, business development, and customer viewing.

The following figure shows the different Netcool/Proviso modules.

Netcool/Proviso documentation consists of the following:
- Release notes
- Configuration recommendations
- User guides
- Technical notes
- Online help

The documentation is available for viewing and downloading on the infocenter at http://publib.boulder.ibm.com/infocenter/tivihelp/v8r1/topic/com.ibm.netcool_pm.doc/welcome_tnpm.htm.

Chapter 1. Introduction

Using the MDE

Mass Data Extraction (MDE) is a streaming interface that enables users to extract large volumes of data from the Proviso database. Using MDE it is possible to extract a subset of data, transform the data, and load the data into a customer-specific reporting warehouse. The Mass Data Extraction utility enables the user to gain a copy of a defined set of data for further processing by an external component.

The Mass Data Extraction utility is made up of three components:
- A JDBC-based API.
- A command-line interface that wraps the API.
- A set of services hosted by TIP/eWAS, which are accessed by MDE through HTTPS.

The MDE API, and hence the CLI, allows you to extract specific resource attributes and metrics. The basic extract types supported are:
- Raw data extraction between a start and end time.
- Aggregated data extraction between a start and end time.
- Resource attribute extraction for a set point in time.

Further to these basic extraction types, it is possible to edit the resulting output in the following ways:
- Set the output type to be XML, CSV, LIF or a schema representing the structure of the requested output.
- Order the output by resource name or time, in ascending or descending order.
- Group data output into set batch periods.
- Customize output by adding name and type data.

Data aggregation

The MDE utility supports data aggregation. The DataChannel component of Netcool/Proviso is responsible for aggregating the raw metric values into daily, weekly and monthly records. This is done for efficiency purposes to support on-demand reporting. The pre-aggregated data can be used to produce reports more quickly than would be possible if large amounts of the raw data had to be aggregated for each query. During this pre-aggregation process, the DataChannel component is able to do more complex calculations and aggregations on the raw data than would be possible at runtime of the reports. Therefore, more aggregated/statistical values are supported for a time series with a granularity of a day, week or month.

For this reason there are two sets of aggregation functions: those that can be applied to any time period of your choosing, and those which must conform to the aggregation periods, or granularity, as set by DataChannel, that is, greater than the period of one day.

The aggregated/statistical values supported by the MDE for granularities of one day or greater are:
- samplequality: The ratio of actual raw values collected to the expected number of raw values over the aggregation period. Sample quality is expressed as (actual count / expected count) * 100. For example, with a polling interval of 15 minutes, the expected count of raw metric values for a one-day period would be 96. If all 96 values are collected for that one-day period, the samplequality would be 100, that is (96 / 96) * 100.
- percentile: You can use a percentile value instead of the average or max statistics to better represent a metric that includes the occasional burst or spike. Occasional bursts or spikes will render a min or max value meaningless, and throw off average and mean calculations. In these cases, using a percentile calculation allows you to see more accurately how the metric is performing over time.

The MDE API supports the following aggregation functions for all granularities:
- avg: The average of all aggregated values.
- min: The minimum of all aggregated values.
- max: The maximum of all aggregated values.
- sum: The sum of all aggregated values.
- count: The number of all aggregated values.

Granularity

The granularity, or the time frame over which statistics are aggregated, can be set:
- When using the command line, the granularity is defined in the query using the <granularity> tag. For more information on how to set the granularity when using MDE through the command line, see the command line example Time Series: Raw on page 23.
- When using the API, the granularity is set as the granularity parameter of the getresourceaggregatedmetrictimeseries() function. For more information on how to set the granularity when using the MDE API, see the function description getresourceaggregatedmetrictimeseries() on page 8.

The possible levels of granularity are:
- 5min: This sets the granularity of data aggregation to 5 minutes. This level of granularity can only be used if the start and end time do not span more than one day.
- 10min: This sets the granularity of data aggregation to 10 minutes. This level of granularity can only be used if the start and end time do not span more than one day.
- 15min: This sets the granularity of data aggregation to 15 minutes. This level of granularity can only be used if the start and end time do not span more than one day.

- 20min: This sets the granularity of data aggregation to 20 minutes. This level of granularity can only be used if the start and end time do not span more than one day.
- 30min: This sets the granularity of data aggregation to 30 minutes. This level of granularity can only be used if the start and end time do not span more than one day.
- 1hour: This sets the granularity of data aggregation to 1 hour. This level of granularity can only be used if the start and end time do not span more than one week.
- 2hour: This sets the granularity of data aggregation to 2 hours. This level of granularity can only be used if the start and end time do not span more than one week.
- 3hour: This sets the granularity of data aggregation to 3 hours. This level of granularity can only be used if the start and end time do not span more than one week.
- day: This sets the granularity of data aggregation to 1 day. This level of granularity can only be used if the start and end time do not span more than one week.
- week: This sets the granularity of data aggregation to 7 days. This level of granularity can only be used if the start and end time do not span more than 90 days.
- month: This sets the granularity of data aggregation to a month. This level of granularity can only be used if the start and end time do not span more than 365 days or one year.

Data customization

Further to extracting data, MDE allows you to customize the resulting output. There are three main forms of data customization:
- Creating your own additional data items.
- Renaming or creating aliases for data items.
- Sorting data output.

Additional data items

It is possible to add bespoke data to your output. This is done by configuring the Name, Type and Result query element attributes. This is described in detail in Configuring additional element data on page 34 and in Adding Alias, Name and Type Data on page 20.

Aliasing

Aliasing allows you to create an entirely new name for a data item. This is described in detail in Adding Alias, Name and Type Data on page 20.

Sorting

You can set the order of output data. Data can be ordered by resource name or by time. This is described in detail in Sorting on page 20.
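The samplequality calculation described in the Data aggregation section can be sketched in Java. This is an illustrative helper only; the class and method names are not part of the MDE API.

```java
// Sketch of the samplequality calculation from "Data aggregation":
// samplequality = (actual count / expected count) * 100, where the expected
// count is the number of polling intervals in the aggregation period.
public class SampleQuality {

    // Expected number of raw values for an aggregation period, given the
    // polling interval (both in minutes).
    public static int expectedCount(int periodMinutes, int pollingIntervalMinutes) {
        return periodMinutes / pollingIntervalMinutes;
    }

    // Sample quality as the percentage of expected raw values actually collected.
    public static double sampleQuality(int actualCount, int expectedCount) {
        return (actualCount / (double) expectedCount) * 100.0;
    }

    public static void main(String[] args) {
        // 15-minute polling over one day: 96 expected raw values.
        int expected = expectedCount(24 * 60, 15);
        System.out.println("expected = " + expected);                    // 96
        System.out.println("quality  = " + sampleQuality(96, expected)); // 100.0
        System.out.println("partial  = " + sampleQuality(48, expected)); // 50.0
    }
}
```

This reproduces the worked example in the text: with all 96 of 96 expected values collected, samplequality is (96 / 96) * 100 = 100.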

Chapter 2. MDE setup

A number of setup steps must be carried out in order to use MDE.

Procedure

1. Place the mde.tar file that comes with your Netcool/Proviso distribution into c:\temp.
2. Untar mde.tar in this directory. This creates two folders, c:\temp\lib and c:\temp\bin.
   The lib folder should contain the following resources:
   - c:\temp\lib\commons-codec-1.3.jar
   - c:\temp\lib\datalet.jar
   - c:\temp\lib\mde.jar
   - c:\temp\lib\sql.jar
   The bin folder should contain scripts for Windows NT and UNIX:
   - c:\temp\bin\mde.bat
   - c:\temp\bin\mde.sh
3. Make sure the following environment variables are set for MDE:
   - JAVA_HOME: Set the JAVA_HOME environment variable to point to the directory where the Java runtime environment (JRE) is installed on your computer.
     IBM JDK: You must ensure you are using the IBM JRE and not the RHEL JRE. The IBM JRE is supplied with the Topology Editor or with TIP. To ensure you are using the right JRE, set the JRE path to conform to that used by the Topology Editor, using the following commands (using the default location for the primary deployer):
       PATH=/opt/IBM/proviso/topologyEditor/jre/bin:$PATH
       export PATH
     For a remote server, that is, one that does not host the primary deployer, you must download and install the required JRE, and set the correct JRE path. See the Netcool/Proviso Configuration Recommendations document for JRE download details.
   - ORACLE_HOME: Specifies the (home) directory where Oracle products are installed, and where the Oracle driver is located. If this is not set, the user can set MDE_JDBC_DRIVER to point directly to the required JDBC driver implementation.
     Note: MDE_JDBC_DRIVER should point to the file ojdbc14.jar, which can be taken from an Oracle host, where the file is located at /opt/oracle/product/10.2.0/jdbc/lib/ojdbc14.jar.
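The environment checks in the steps above can be captured in a small pre-flight helper. This is a hypothetical utility, not shipped with MDE; only the variable names come from the text.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical pre-flight check for the environment variables MDE expects.
// JAVA_HOME is always required; ORACLE_HOME or MDE_JDBC_DRIVER must be set
// so the Oracle JDBC driver (ojdbc14.jar) can be located.
public class MdeEnvCheck {

    // Returns the list of problems found; an empty list means the
    // environment looks usable.
    public static List<String> validate(Map<String, String> env) {
        List<String> problems = new ArrayList<>();
        if (isBlank(env.get("JAVA_HOME"))) {
            problems.add("JAVA_HOME is not set");
        }
        if (isBlank(env.get("ORACLE_HOME")) && isBlank(env.get("MDE_JDBC_DRIVER"))) {
            problems.add("set ORACLE_HOME or point MDE_JDBC_DRIVER at ojdbc14.jar");
        }
        return problems;
    }

    private static boolean isBlank(String s) {
        return s == null || s.trim().isEmpty();
    }

    public static void main(String[] args) {
        // Inspect the real environment of the current process.
        for (String p : validate(System.getenv())) {
            System.out.println("MDE setup problem: " + p);
        }
    }
}
```

Running it before invoking mde.sh surfaces missing variables early instead of failing inside the extraction run.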


Chapter 3. MDE API

This section describes the Mass Data Extraction (MDE) API. The MDE API forms the foundation for the MDE utility. It is made up of three functions.

getresourcemetrictimeseries()

The getresourcemetrictimeseries() function allows you to perform raw data extraction.

Purpose

The getresourcemetrictimeseries() function retrieves, through JDBC, raw data for a set of resources in a set of resource groups over a set time period.

getresourcemetrictimeseries(String group, String[] metrictypes, String[] attributetypes, Timestamp starttime, Timestamp endtime, String[] hint)

Parameters

group (String): A resource group name.

metrictypes (String[]): An array of metrics to be retrieved. An entry is either the name of the metric or its id.

attributetypes (String[]): The array of attributes to be retrieved. An entry is either the name of the attribute or its id.

starttime (Timestamp): The beginning of the extraction time window. This date is inclusive.

endtime (Timestamp): The end of the extraction time window.

hint (String[]): The sorting order for the result set. The parameter is optional:
- ascending(resource): Query results are given a primary sort by resource in ascending order. The default secondary sort is by time in ascending order.
- ascending(time): Query results are given a primary sort by time in ascending order. The default secondary sort is by resource in ascending order.
- descending(resource): Query results are given a primary sort by resource in descending order. The default secondary sort is by time in descending order.
- descending(time): Query results are given a primary sort by time in descending order. The default secondary sort is by resource in descending order.
- includenearrealtime: This causes any other query hint to be ignored and enforces ascending(time) sorting, which by default means the secondary sort is by resource ascending.
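For illustration, the argument values for a call like the one in the example that follows can be assembled in plain Java. This is a sketch only; the class and field names are assumptions, no database connection is made, and only the argument types come from the signature above.

```java
import java.sql.Timestamp;
import java.util.Arrays;

// Sketch: assembling the arguments for getresourcemetrictimeseries().
// The group, metric, and attribute names match the example in this guide.
public class RawQueryArgs {

    public static final String GROUP = "NOC Reporting";

    public static final String[] METRIC_TYPES = {
        "AP~Generic~Universal~Throughput~Inbound Throughput (bps)",
        "AP~Generic~Universal~Throughput~Outbound Throughput (bps)"
    };

    public static final String[] ATTRIBUTE_TYPES = { "AP_ifSpeed", "AP_ifStatus", "AP_ifType" };

    // java.sql.Timestamp, as used by the API; starttime is inclusive.
    public static final Timestamp START = Timestamp.valueOf("2009-12-05 00:00:00.0");
    public static final Timestamp END   = Timestamp.valueOf("2009-12-05 23:00:00.0");

    // Optional sorting hint, e.g. "ascending(time)"; an empty entry keeps the default order.
    public static final String[] HINT = { "" };

    public static void main(String[] args) {
        System.out.println(GROUP);
        System.out.println(Arrays.toString(METRIC_TYPES));
        System.out.println(Arrays.toString(ATTRIBUTE_TYPES));
        System.out.println(START + " -> " + END);
    }
}
```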

Example

The following example extracts metrics and attributes over a set time period:

getresourcemetrictimeseries("NOC Reporting",
    ["AP~Generic~Universal~Throughput~Inbound Throughput (bps)",
     "AP~Generic~Universal~Throughput~Outbound Throughput (bps)"],
    ["AP_ifSpeed", "AP_ifStatus", "AP_ifType"],
    2009-12-05 00:00:00.0-0000, 2009-12-05 23:00:00.0-0000, [""])

The sample has requested the following attributes and raw metrics from the NOC Reporting group:
- The AP~Generic~Universal~Throughput~Inbound Throughput (bps) metric.
- The AP~Generic~Universal~Throughput~Outbound Throughput (bps) metric.
- The AP_ifSpeed attribute.
- The AP_ifStatus attribute.
- The AP_ifType attribute.

The queryidentifier, Raw_Timeseries, is used in the naming of the output file.

Note: This example corresponds to the command line example Time Series: Raw on page 23.

getresourceaggregatedmetrictimeseries()

The getresourceaggregatedmetrictimeseries() function allows you to perform aggregated data extraction.

Purpose

getresourceaggregatedmetrictimeseries() retrieves, through JDBC, data for a set of resources. This function provides the additional options of aggregating the extracted data, using one of the aggregation types, and also setting the granularity of aggregation, or the time period over which the selected aggregation operation is applied.

getresourceaggregatedmetrictimeseries(String group, String[] metrictypes, String[] attributetypes, String granularity, Timestamp starttime, Timestamp endtime, String[] hint)

Parameters

group (String): A resource group name.

metrictypes (String[]): Array of metric summaries to be retrieved. Each metric summary is specified as follows:

    <statistic>(<metric path>)

Where:
- <statistic>: can be min, max, sum, count, avg, percentile or samplequality.
- <metric path>: is the metric, for example, AP~Generic~Universal~Throughput~Inbound Throughput (bps).

The metric type name is either the readable name of the metric or its id. (If the id is used instead of the name, it is also used in the result.) The aggregation statistic allows you to define the operation that is carried out on the data for the period as set by granularity. The MDE API supports the following aggregation functions for all granularities:
- avg: The average of all aggregated values.
- min: The minimum of all aggregated values.
- max: The maximum of all aggregated values.
- sum: The sum of all aggregated values.
- count: The number of all aggregated values.
For more information on the possible operations that can be performed, see Data aggregation on page 1.

attributetypes (String[]): The array of attributes to be retrieved. An entry is either the name of the attribute or its id. (If the id is used instead of the name, it is also used in the result.)

granularity (String): Granularity is expressed as one of the following: 5min, 10min, 15min, 20min, 30min, 1hour, 2hour, 3hour, day, week, month. For more information on granularity and its constraints, see Data aggregation on page 1.

starttime (Timestamp): Beginning of the extraction time window. This date is inclusive.

endtime (Timestamp): End of the extraction time window.

hint (String[]): The sorting order for the result set. The parameter is optional:
- ascending(resource): Query results are given a primary sort by resource in ascending order. The default secondary sort is by time in ascending order.
- ascending(time): Query results are given a primary sort by time in ascending order. The default secondary sort is by resource in ascending order.
- descending(resource): Query results are given a primary sort by resource in descending order. The default secondary sort is by time in descending order.
- descending(time): Query results are given a primary sort by time in descending order. The default secondary sort is by resource in descending order.
- includenearrealtime: This causes any other query hint to be ignored and enforces ascending(time) sorting, which by default means the secondary sort is by resource ascending.

Example

This example extracts attributes and aggregated metrics:

getresourceaggregatedmetrictimeseries("NOC Reporting",
    ["min(AP~Generic~Universal~Throughput~Inbound Throughput (bps))",
     "max(AP~Generic~Universal~Throughput~Inbound Throughput (bps))",
     "avg(AP~Generic~Universal~Throughput~Inbound Throughput (bps))",
     "sum(AP~Generic~Universal~Throughput~Inbound Throughput (bps))",
     "count(AP~Generic~Universal~Throughput~Inbound Throughput (bps))",
     "samplequality(AP~Generic~Universal~Throughput~Inbound Throughput (bps))",
     "percentile(AP~Generic~Universal~Throughput~Inbound Throughput (bps))"],
    ["AP_ifSpeed", "AP_ifStatus", "AP_ifType"], "day",
    2009-12-15 00:00:00.0-0000, 2009-12-16 00:00:00.0-0000, [""])

From the NOC Reporting group the following attributes and metrics are requested:
- The min(AP~Generic~Universal~Throughput~Inbound Throughput (bps)) metric. This metric is wrapped with the min statistical wrapper. The output will be an element containing the minimum result for this metric over the period of time as set in the granularity element.
- The max(AP~Generic~Universal~Throughput~Inbound Throughput (bps)) metric. This metric is wrapped with the max statistical wrapper. The output will be an element containing the maximum result for this metric over the period of time as set in the granularity element.
- The avg(AP~Generic~Universal~Throughput~Inbound Throughput (bps)) metric. This metric is wrapped with the avg statistical wrapper. The output will be an element containing the average result for this metric over the period of time as set in the granularity element.
- The sum(AP~Generic~Universal~Throughput~Inbound Throughput (bps)) metric. This metric is wrapped with the sum statistical wrapper. The output will be an element containing the sum of all results for this metric over the period of time as set in the granularity element.
- The count(AP~Generic~Universal~Throughput~Inbound Throughput (bps)) metric. This metric is wrapped with the count statistical wrapper. The output will be an element containing the total number of occurrences of this metric over the period of time as set in the granularity element.
- The samplequality(AP~Generic~Universal~Throughput~Inbound Throughput (bps)) metric. This metric is wrapped with the samplequality statistical wrapper. The output will be an element containing count expressed as a percentage of the number of expected occurrences of this metric. This aggregation type can only be used if the granularity, or period of aggregation, is greater than one day.
- The percentile(AP~Generic~Universal~Throughput~Inbound Throughput (bps)) metric. This metric is wrapped with the percentile statistical wrapper. This metric can only be used if the granularity, or period of aggregation, is greater than one day.
- The AP_ifSpeed attribute.
- The AP_ifStatus attribute.
- The AP_ifType attribute.

The queryidentifier, ResourcesNocReporting, is used in the naming of the output file.

Note: This example corresponds to the command line example Time Series: Aggregated on page 25.
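The `<statistic>(<metric path>)` convention used by metrictypes can be split with a short helper. This is a sketch; the class is hypothetical and not part of the MDE API.

```java
// Sketch: splitting a metric summary of the form <statistic>(<metric path>)
// into its statistic and metric path. The metric path itself may contain
// parentheses, e.g. "Throughput (bps)", so we use the first '(' and the
// last ')' rather than a naive split.
public class MetricSummary {

    public static String statistic(String summary) {
        return summary.substring(0, summary.indexOf('('));
    }

    public static String metricPath(String summary) {
        return summary.substring(summary.indexOf('(') + 1, summary.lastIndexOf(')'));
    }

    public static void main(String[] args) {
        String s = "min(AP~Generic~Universal~Throughput~Inbound Throughput (bps))";
        System.out.println(statistic(s));   // min
        System.out.println(metricPath(s));  // AP~Generic~Universal~Throughput~Inbound Throughput (bps)
    }
}
```

Matching the outermost parentheses is the important design point here, since the "(bps)" unit suffix would otherwise truncate the metric path.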

getresourcesnapshot()

The getresourcesnapshot() function allows you to extract topology data.

Purpose

The purpose of getresourcesnapshot() is to get a snapshot of the resources available in a set of groups.

getresourcesnapshot(String group, String[] attributetypes, Timestamp time)

Parameters

group (String): A resource group name.

attributetypes (String[]): The array of attributes to be retrieved. An entry is either the name of the attribute or its id. (If the id is used instead of the name, it is also used in the result.)

time (Timestamp): Time of the snapshot; the time at which the resource information is retrieved. If this time is not specified, the most current values are retrieved.

Example

The sample takes a snapshot:

getresourcesnapshot("NOC Reporting",
    ["AP_ifSpeed", "AP_ifStatus", "AP_ifType"],
    2009-12-05 00:00:00.0-0000)

From the NOC Reporting group the following attributes are requested:
- The AP_ifSpeed attribute.
- The AP_ifStatus attribute.
- The AP_ifType attribute.

Note: This example corresponds to the command line example Snapshot on page 21.


Chapter 4. Mass Data Extraction command-line interface

The Mass Data Extraction (MDE) utility can be run from the command line. Running the MDE from the command line requires two things:
- An XML data query defining the various metrics and attributes to extract.
- An MDE command containing the appropriate arguments.

For details on how to use the MDE API, see Chapter 3, MDE API, on page 7.

Command-line options

The Mass Data Extraction command-line interface allows you to provide all the details required to perform data extraction. The following table lists all MDE command-line options:

Table 1. Command-line options

-u[ser]
  Format: username string
  Description: Username required to execute extraction.

-p[assword]
  Format: password string
  Description: Password credentials. This argument is optional, as it can be specified as an environment variable, MDE_PASSWORD.

-host
  Format: hostname or IP address
  Description: The name of the host where the service is running.

-port
  Format: port number
  Description: Access port to the service. You are allowed to use two ports, one secure and one unsecure. By default these are set to:
  - 16315 is the HTTP port, which requires no security. This value can be configured at install time.
  - 16316 is the HTTPS port, which means that it is a secure port and you have to get a server-side certificate. This value can be configured at install time.
  Note: For more information on setting up secure data extraction using MDE, see Chapter 6, MDE Security, on page 43.

-q[uery]
  Format: XML file name
  Description: The name of the XML file defining the query scope.

-s[tart]
  Format: one of
    yyyy-mm-dd't'hh:mm:ss.sz
    yyyy-mm-dd't'hh:mm:ss.s
    yyyy-mm-dd't'hh:mm:ssz
    yyyy-mm-dd't'hh:mm:ss
    yyyy-mm-dd't'hh:mmz
    yyyy-mm-dd't'hh:mm
    yyyy-mm-ddz
    yyyy-mm-dd
  Description: Start time of the extraction window. If not specified, the current time is used. Not providing a timezone (Z) defaults to UTC/GMT-0000.

Table 1. Command-line options (continued)

-e[nd]
  Format: one of
    yyyy-MM-dd'T'HH:mm:ss.SZ
    yyyy-MM-dd'T'HH:mm:ss.S
    yyyy-MM-dd'T'HH:mm:ssZ
    yyyy-MM-dd'T'HH:mm:ss
    yyyy-MM-dd'T'HH:mmZ
    yyyy-MM-dd'T'HH:mm
    yyyy-MM-ddZ
    yyyy-MM-dd
  Description: End time of the extraction window. If not specified, the start value is used. Not providing a timezone (Z) defaults to UTC/GMT-0000.

-o[utput]
  Format: directory path
  Description: The output directory path. If this is not supplied, the output is displayed in the command-line window.

-b[atch]
  Format: 5 min, 10 min, 15 min ... 60 min
  Description: Segmentation of the output when the output is a file.

-f[ormat]
  Format: csv | xml | lif | schema
  Description: The file format used for the output. The default is csv. For information on the various output formats, see Output Conventions and Formats on page 15.

-l[ifblock]
  Format: fixed | type
  Description: Specify LIF block prefixing. This can be a resource type or a fixed prefix. The default is none. This option is only used when the -format option is set to lif. For more information see the "lifblock" section of Output Conventions and Formats on page 15.

-d[ebug]
  Format: SEVERE | WARNING | INFO | CONFIG | FINE | FINER | FINEST
  Description: Specify a debug level. The default is INFO.

Batch mode

The MDE utility allows you to batch extracted data. The result is that, instead of one output file, the extracted data is split up and output as a set of files, where each file contains data for a set time period.

As well as specifying the time frame for data extraction using the start and end command line parameters, it is possible to batch the resulting output into multiple files. For example, if the start and end time span one day, you can batch the output into one-hour groups. The result is a discrete set of files where each file contains one hour of data. The advantage is a greater level of granularity, reduced file size, and increased readability.

To output your data in batch mode, use the command line -b option followed by the batch period in minutes.
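The batching behaviour above can be sketched in Python. This is an illustrative sketch rather than product code; it shows how a start/end extraction window and a -b period of 5 to 60 minutes divide into the per-file sub-windows that MDE writes out.

```python
from datetime import datetime, timedelta

def batch_windows(start, end, batch_minutes):
    """Split an extraction window into batch-sized sub-windows.

    Mirrors the documented -b behaviour: the batch period is given
    in minutes and must be between 5 and 60.
    """
    if not 5 <= batch_minutes <= 60:
        raise ValueError("batch period must be between 5 and 60 minutes")
    step = timedelta(minutes=batch_minutes)
    windows, cursor = [], start
    while cursor < end:
        # Each tuple corresponds to the time span of one output file.
        windows.append((cursor, min(cursor + step, end)))
        cursor += step
    return windows

# One day batched into one-hour groups yields 24 files, as described above.
day = batch_windows(datetime(2009, 12, 5, 0, 0),
                    datetime(2009, 12, 6, 0, 0), 60)
```

With a one-day window and -b 60, the sketch produces 24 hourly sub-windows, matching the example in the text.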
Limitations
- The batch period is set in minutes.
- The batch period can be set between 5 and 60 minutes.

If only attributes are defined in your query, then any supplied batching information is ignored and the end time is ignored. An extract of only attributes is called a snapshot.

If you set batch mode to output a file once an hour, the output is created on the hour, obeying the start and end time you set.

Output Conventions and Formats

This section describes the available formats for output files and their conventions. There are four types of output formats available:
- XML (Extensible Markup Language)
- LIF (Loader Input Format)
- CSV (Comma Separated Values)
- Schema

XML

The following rules and conventions apply to any output saved in the XML format:
- If a specific resource does not have a metric or attribute value, it is not included in the output.
- Dates and times are expressed as the standard XML/XSL Date/Time type: YYYY-MM-DDThh:mm:ss.SZ (UTC time).
- The order of the results follows the order of declaration of the attributes and metrics.
- All output strings are wrapped in the "CDATA" element.

The following is an example of XML output.
Note: To ensure the following output does not run off the page, carriage returns have been inserted. A forward slash, "/", identifies where a carriage return has been added.
<?xml version="1.0" encoding="UTF-8"?>
<response>
<result>
<granularity>raw</granularity>
<header>
<measType indx="4" type="double"><![CDATA[raw(AP~Generic~Universal~Throughput~Inbound /
Throughput (bps))]]></measType>
<measType indx="5" type="double"><![CDATA[raw(AP~Generic~Universal~Throughput~Outbound /
Throughput (bps))]]></measType>
<measType indx="6" type="string"><![CDATA[resource[AP_ifSpeed]]]></measType>
<measType indx="7" type="string"><![CDATA[resource[AP_ifStatus]]]></measType>
<measType indx="8" type="string"><![CDATA[resource[AP_ifType]]]></measType>
</header>
<measInfo>
<group><![CDATA[NOC Reporting~Devices~Interfaces~phlox.galway.ie.ibm.com]]></group>
<id><![CDATA[host1.us.ibm.com_if<2>]]></id>
<time>2010-04-01T00:00:00.0+0000</time>
<measValue indx="4">3678.653908</measValue>
<measValue indx="5">1723.779153</measValue>
<measValue indx="6"><![CDATA[1000000000]]></measValue>
<measValue indx="7"><![CDATA[up:up]]></measValue>
<measValue indx="8"><![CDATA[ethernetCsmacd]]></measValue>
</measInfo>
<measInfo>
<group><![CDATA[NOC Reporting~Devices~Interfaces~phlox.galway.ie.ibm.com]]></group>
<id><![CDATA[host1.us.ibm.com_if<2>]]></id>
<time>2010-04-01T00:15:00.0+0000</time>
<measValue indx="4">2907.169677</measValue>
<measValue indx="5">1738.927532</measValue>
<measValue indx="6"><![CDATA[1000000000]]></measValue>
<measValue indx="7"><![CDATA[up:up]]></measValue>
<measValue indx="8"><![CDATA[ethernetCsmacd]]></measValue>
</measInfo>
<measInfo>
<group><![CDATA[NOC Reporting~Devices~Interfaces~phlox.galway.ie.ibm.com]]></group>
<id><![CDATA[host1.us.ibm.com_if<2>]]></id>
<time>2010-04-01T00:30:00.0+0000</time>
<measValue indx="4">2837.430801</measValue>
<measValue indx="5">1685.745404</measValue>
<measValue indx="6"><![CDATA[1000000000]]></measValue>
<measValue indx="7"><![CDATA[up:up]]></measValue>
<measValue indx="8"><![CDATA[ethernetCsmacd]]></measValue>
</measInfo>
...

LIF

LIF is a proprietary TNPM data format. The LIF format is a hierarchical grouping of name-value pairs and fits the nature of the data exported.

If a specific resource does not have a metric or attribute value, the field is left blank. Attribute and metric names are translated to respect the restrictions of the LIF format. The restrictions are:
- Only ASCII characters are allowed; any non-ASCII character is replaced with an underscore, that is, "_".
- Any whitespace is replaced with an underscore, that is, "_".
- Any occurrence of "[", "]", "{", or "}" is replaced with an underscore, that is, "_".
- Strings are wrapped in single quotes.

lifblock

When choosing the LIF output format, specifying the -lifblock option allows you to add extra formatting to the output file. There are two options:

Fixed: When choosing this option you must supply a name through the command line, which is given to each output block. For example:

-f lif -lifblock fixed "Fixed_Name"

This results in each block being given the name "Fixed_Name".
This name also has the granularity of the result appended, so the output for each block will look similar to the following:

{
{
Fixed_Name_1hour
{
resource_group NOC Reporting~MatrixTest~By Region~East
resource_id host.us.ibm.com_if<2>
timestamp 2009-06-01T12:00:00.0+0300
resource_ap_ifspeed_ 11
sum_outbound_throughput bps 1115.420554
}

Type: The type option forces each block to use the defined type element, which is created using the query, to set the block name. This means that the type must be present for each block. If this is not the case, then the granularity is used. For more information on how the type element is set for a query, see Adding Alias, Name and Type Data on page 20.
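The LIF naming restrictions listed above can be expressed as a small translation function. This is an illustrative Python sketch implementing only the three stated substitution rules; the product may apply further substitutions (the shipped examples suggest characters such as "~" are also replaced), so treat this as a sketch of the documented rules rather than the exact product behaviour.

```python
def lif_field_name(name):
    """Translate a metric or attribute name per the stated LIF rules.

    Documented rules only: non-ASCII characters, whitespace, and the
    characters [ ] { } each become an underscore. Everything else is
    passed through unchanged.
    """
    out = []
    for ch in name:
        if ord(ch) > 127 or ch.isspace() or ch in "[]{}":
            out.append("_")
        else:
            out.append(ch)
    return "".join(out)
```

For example, the XML header name resource[AP_ifSpeed] maps to resource_AP_ifSpeed_, which matches the resource_..._ field-name pattern visible in the LIF output example.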

The following is an example of LIF output.

#%npr
#
{
{
raw
{
resource_group NOC Reporting~Devices~Interfaces~phlox.galway.ie.ibm.com
resource_id host1.us.ibm.com_if<2>
day 01Apr2010
start_time 00:00
resource_ap_ifspeed_ 1000000000
resource_ap_ifstatus_ up:up
resource_ap_iftype_ ethernetCsmacd
raw_ap_generic_universal_throughput_inbound_throughput bps 3678.653908
raw_ap_generic_universal_throughput_outbound_throughput bps 1723.779153
measurement_seconds raw
}
raw
{
resource_group NOC Reporting~Devices~Interfaces~phlox.galway.ie.ibm.com
resource_id host1.us.ibm.com_if<2>
day 01Apr2010
start_time 00:15
resource_ap_ifspeed_ 1000000000
resource_ap_ifstatus_ up:up
resource_ap_iftype_ ethernetCsmacd
raw_ap_generic_universal_throughput_inbound_throughput bps 2907.169677
raw_ap_generic_universal_throughput_outbound_throughput bps 1738.927532
measurement_seconds raw
}
raw
{
resource_group NOC Reporting~Devices~Interfaces~phlox.galway.ie.ibm.com
resource_id host1.us.ibm.com_if<2>
day 01Apr2010
start_time 00:30
resource_ap_ifspeed_ 1000000000
resource_ap_ifstatus_ up:up
resource_ap_iftype_ ethernetCsmacd
raw_ap_generic_universal_throughput_inbound_throughput bps 2837.430801
raw_ap_generic_universal_throughput_outbound_throughput bps 1685.745404
measurement_seconds raw
}
...

CSV

The following rules and conventions apply to any output saved in the CSV format:
- All strings are wrapped in double quotes.
- If a specific resource does not have a metric or attribute value, the field is left blank.
- Time is expressed in the following format: YYYY-MM-DDThh:mm:ss.SZ (UTC time).

The following is an example of CSV output.
Note: To ensure the following output does not run off the page, the CSV table has been split lengthwise into three parts.
Table 2. CSV Part 1

group                                              id                      time
NOC Reporting~Devices~Interfaces~host1.us.ibm.com  host1.us.ibm.com_if<2>  2010-04-01T00:00:00.0+0000
NOC Reporting~Devices~Interfaces~host1.us.ibm.com  host1.us.ibm.com_if<2>  2010-04-01T00:15:00.0+0000
NOC Reporting~Devices~Interfaces~host1.us.ibm.com  host1.us.ibm.com_if<2>  2010-04-01T00:30:00.0+0000

Table 3. CSV Part 2

raw(AP~Generic~Universal~Throughput~Inbound Throughput (bps))  raw(AP~Generic~Universal~Throughput~Outbound Throughput (bps))
3678.653908  1723.779153
2907.169677  1738.927532
2837.430801  1685.745404

Table 4. CSV Part 3

resource[AP_ifSpeed]  resource[AP_ifStatus]  resource[AP_ifType]
1000000000            up:up                  ethernetCsmacd
1000000000            up:up                  ethernetCsmacd
1000000000            up:up                  ethernetCsmacd

Schema

A schema file can only be generated when a query file is defined. The schema is generated by MDE from the query results. This format has been introduced to allow for DataStage integration. For information on how to set up DataStage integration, see Chapter 5, DataStage integration, on page 39.

The following is an example of schema output.
Note: To ensure the following output does not run off the page, carriage returns have been inserted. A forward slash, "/", identifies where a carriage return has been added.

record(group:string;id:string;time:timestamp[microseconds]; /
raw_ap_generic_universal_throughput_inbound_throughput bps :dfloat; /
raw_ap_generic_universal_throughput_outbound_throughput bps :dfloat; /
resource_ap_ifspeed_:string;resource_ap_ifstatus_:string; /
resource_ap_iftype_:string;)

Naming of output files

The file naming convention for output files conforms to the 3GPP format. An example is A20090601.1200+0300-20090602.1200+0300_RawNocReporting.xml. A very basic breakdown of the 3GPP format is:
- A: Indicates that the file relates to a single network element and a single granularity period.
- 20090601.1200+0300: The start date and time, with time zone offset.
- 20090602.1200+0300: The end date and time, with time zone offset.
- RawNocReporting: The queryidentifier as set in the query XML. There must be no spaces in the queryidentifier.
In the case where batch output is in operation, the file name changes to <filename>_-_<batchindex>.<format>, for example:
A20090601.1200+0300-20090602.1200+0300_RawNocReporting_-_001.xml

In the case where a file is duplicated, the file name changes to <filename>_<duplicateindex>.<format>, for example:
A20090601.1200+0300-20090602.1200+0300_RawNocReporting_1.xml

In the case where there are duplicate batch files, the file name changes to <filename>_<duplicateindex>_-_<batchindex>.<format>, for example:
A20090601.1200+0300-20090602.1200+0300_RawNocReporting_1_-_001.xml

When files are created they are placed in a temporary folder until all processing is complete; they are then relocated to the specified output location.
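The naming rules above can be sketched as a small builder function. This is an illustrative Python sketch, not product code; it assumes the start and end stamps are already formatted as yyyymmdd.hhmm strings with a zone offset, and the helper name is hypothetical.

```python
def output_filename(start, end, query_identifier, fmt,
                    duplicate_index=None, batch_index=None):
    """Build a 3GPP-style MDE output file name.

    Follows the documented pattern: leading "A", start and end
    stamps, the queryidentifier, then the optional duplicate index
    and the optional zero-padded batch index.
    """
    name = "A{0}-{1}_{2}".format(start, end, query_identifier)
    if duplicate_index is not None:
        name += "_{0}".format(duplicate_index)
    if batch_index is not None:
        name += "_-_{0:03d}".format(batch_index)
    return name + "." + fmt

# Reproduce the batch-file example from the text.
fname = output_filename("20090601.1200+0300", "20090602.1200+0300",
                        "RawNocReporting", "xml", batch_index=1)
```

With batch_index=1 the sketch yields the first batch-file name shown above.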

Creating a query

Every MDE command-line interface command must be accompanied by a query definition. This section describes how to create that query.

A Basic Query

The MDE query allows you to specify the attributes and metrics to extract. A basic query requires the following tags:
- <query>: This tag is used to wrap all other tags and indicate that they form a query.
- <resourcegroup>: This tag is used to indicate where the required attributes or metrics can be found. This resource group is the top node within the network element tree being explored.
- <attribute>: This tag is used to identify a desired attribute. Your query can contain a minimum of one of these tags; there is no upper bound.
- <metric>: This tag is used to identify a desired metric. Your query can contain a minimum of zero of these tags; there is no upper bound.

Group, metric and attribute naming

To create well-formed queries it is important that the correct names for groups, metrics, and attributes are used. These names can be obtained by querying the database directly, or by using the DataMart tool Resource Editor. The Resource Editor provides the full group and metric paths, and the attributes on a resource.

Limitations

It is recommended that the number of metric/statistic combinations requested in a single execution of the query be limited to 30-50 combinations. Such a limit should fit within the physical configuration (block size) of the Oracle database.

There are circumstances where it is necessary to increase the maximum heap size setting:
- The MDE JVM encounters an "out of memory" exception.
- A large volume of data is to be processed.

The MDE launch script (mde.bat/mde.sh) configures the JVM heap size with a default minimum size of 128 MB (-Xms128m) and a default maximum of 1 GB (-Xmx1g). These settings are appropriate for most systems that run MDE.
- -Xms128m: The minimum amount of memory that the JVM allocates to the heap at startup.
- -Xmx1g: The maximum amount of memory that the JVM reserves for the heap. Note that this memory must be available for the JVM to start.

These settings can be edited directly within the MDE script you are using, either mde.bat or mde.sh.

Advanced Settings

Within a query you can also define:
- The sorting of output. For more information see Sorting on page 20.

- Additional type and name elements, plus alternative names, or aliases, for any attribute or metric. For more information see Adding Alias, Name and Type Data.

The following sections explain how a query can be defined to incorporate these settings.

Sorting

To set the sorting order of query results, include the <hint> tag. MDE supports primary and secondary sorting of result elements. The primary sorting order is set using the query <hint> tag. The secondary sorting order is set to a default based on the primary. If no hint is set in the query, the default sorting is by resource in ascending order. Only one query hint should be set.

Content for the <hint> tag is limited to the following:
- ascending(resource): Query results are given a primary sort by resource in ascending order. The default secondary sort is by time in ascending order.
- ascending(time): Query results are given a primary sort by time in ascending order. The default secondary sort is by resource in ascending order.
- descending(resource): Query results are given a primary sort by resource in descending order. The default secondary sort is by time in descending order.
- descending(time): Query results are given a primary sort by time in descending order. The default secondary sort is by resource in descending order.
- includenearrealtime: This causes any other query hint to be ignored and enforces ascending(time) sorting, which by default means the secondary sort is by resource ascending.

If resource is set as the primary sorting hint, then time is set as the secondary sort by default. If time is set as the primary sorting hint, then resource is set as the secondary sort by default.

Adding Alias, Name and Type Data

It is possible to add an alias to a result item, and to add name or type elements to the result set. Aliasing allows you to give a more informative name to a result element.
Network elements within Proviso contain no name or type data; they do not have a specific name element or type element. For this reason it is possible to create a name or type element from an existing attribute result.

An alias can be applied to both metrics and attributes. The name, type, and result attributes can only be specified on attribute elements.

The XML schema defining the query structure is:

<!-- Metric Type -->
<xsd:complexType name="MetricType">
  <xsd:simpleContent>
    <xsd:extension base="xsd:string">
      <xsd:attribute name="alias" type="xsd:string" />
    </xsd:extension>
  </xsd:simpleContent>
</xsd:complexType>
<!-- Attribute Type -->
<xsd:complexType name="AttributeType">
  <xsd:simpleContent>
    <xsd:extension base="xsd:string">
      <xsd:attribute name="alias" type="xsd:string" />
      <xsd:attribute name="name" type="xsd:boolean" />
      <xsd:attribute name="type" type="xsd:boolean" />
      <xsd:attribute name="result" type="xsd:boolean" />
    </xsd:extension>
  </xsd:simpleContent>
</xsd:complexType>

The XML schema lists the following attributes for the attribute elements:
- alias: Setting this element attribute for a metric or attribute allows you to define a new name under which the corresponding results are listed.
- name: Setting this to "true" for an attribute means the result, should it exist, is added to the result set and used as the name of the result block. The default setting is "false". Only one should be set per query.
- type: Setting this to "true" for an attribute means the result, should it exist, is added to the result set and used as the type for the result block. The default setting is "false". Only one should be set per query.
- result: Setting this to "false" causes the attribute to be removed from the result output. Typically, this is used in conjunction with either name or type, when the user only wants the attribute to provide type or name information and does not want the results displayed under their own name. The default setting is "true".

Note: For a detailed example of how these settings are used, see Alias on page 32 and Configuring additional element data on page 34.

MDE Command Line Examples

This section documents a set of MDE command line examples. Each example contains:
- The command.
- The required query definition.
- An output excerpt.

There are five command line examples.

Snapshot

The MDE command line snapshot example.

Purpose

The purpose of a snapshot is to return attribute data for a set point in time. A snapshot is what is automatically returned when the user defines a query containing only attributes.
Any supplied batching information or end time is ignored.
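An attributes-only query of the kind described here can also be generated programmatically. This is an illustrative Python sketch; the helper is not part of MDE, and only the tag names follow the query examples shown in this chapter.

```python
import xml.etree.ElementTree as ET

def build_snapshot_query(group, attributes, query_identifier):
    """Assemble a minimal attributes-only (snapshot) query document.

    Because the query contains only <attribute> elements and no
    <metric> elements, MDE treats the extract as a snapshot.
    """
    query = ET.Element("query")
    ET.SubElement(query, "resourcegroup").text = group
    for attr in attributes:
        ET.SubElement(query, "attribute").text = attr
    ET.SubElement(query, "queryidentifier").text = query_identifier
    return ET.tostring(query, encoding="unicode")

doc = build_snapshot_query("NOC Reporting",
                           ["AP_ifSpeed", "AP_ifStatus", "AP_ifType"],
                           "Snapshot")
```

Serializing the element tree produces a document equivalent to the snapshot query used in the example that follows.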

As with all the MDE examples, there are two things you must define to gain the required output:
- The command, which allows you to supply details of the application server, access credentials, timing, and so on.
- The query, which allows you to specify the desired attributes, as well as sorting, aliasing, and more.

Command Line

Note: To ensure the following command does not run off the page, carriage returns have been inserted. A forward slash, "/", identifies where a carriage return has been added.

mde -u tipadmin -p tipadmin -host host.us.ibm.com -port 16315 /
-query ..\queries\SnapshotQuery.xml -o ..\output\ -f xml

Note: The mde command string used in the sample is mde.sh for UNIX and mde.bat for Windows.

The following is also specified in the command:
- -u tipadmin: The username is set to tipadmin.
- -p tipadmin: The password is set to tipadmin.
- -host host.us.ibm.com: The application server name is set to host.us.ibm.com.
- -port 16315: The access port is set to 16315.
- -query ..\queries\SnapshotQuery.xml: The query file containing the attributes and metrics to be extracted is set to SnapshotQuery.xml.
- -o ..\output\: The output directory is set to the ..\output\ directory.
- -f xml: The output format is set to xml.

Note: No start or end time was stated in the command, which means the start time is taken as the time the command was executed and the most current data is extracted.

Query

In the following query only attributes are defined, and are, therefore, part of the extract. When only attributes are defined in the query, the extract is a snapshot.

<?xml version="1.0" encoding="UTF-8"?>
<query>
<resourcegroup>NOC Reporting</resourcegroup>
<attribute>AP_ifSpeed</attribute>
<attribute>AP_ifStatus</attribute>
<attribute>AP_ifType</attribute>
<queryidentifier>Snapshot</queryidentifier>
</query>

From the NOC Reporting group the following attributes are requested:
- The AP_ifSpeed attribute.
- The AP_ifStatus attribute.
- The AP_ifType attribute.
The queryidentifier, Snapshot, is used in the naming of the output file.

Output Excerpt

The output is displayed in XML format.

<?xml version="1.0" encoding="UTF-8"?>
<response>
<result>
<granularity>raw</granularity>
<header>
<measType indx="4" type="string"><![CDATA[resource[AP_ifSpeed]]]></measType>
<measType indx="5" type="string"><![CDATA[resource[AP_ifStatus]]]></measType>
<measType indx="6" type="string"><![CDATA[resource[AP_ifType]]]></measType>
</header>
<measInfo>
<group><![CDATA[NOC Reporting~Devices~IP Device]]></group>
<id><![CDATA[172.16.100.100_<null>]]></id>
<time>2010-02-16T17:35:05.0+0000</time>
<measValue indx="5"><![CDATA[up]]></measValue>
</measInfo>
<measInfo>
<group><![CDATA[NOC Reporting~Interfaces~Availability~172.16.100.100]]></group>
<id><![CDATA[172.16.100.100_<null>]]></id>
<time>2010-02-16T17:35:05.0+0000</time>
<measValue indx="5"><![CDATA[up]]></measValue>
</measInfo>
...
<measInfo>
<group><![CDATA[NOC Reporting~Interfaces~Packet Throughput~ethernetCsmacd]]></group>
<id><![CDATA[host.us.ibm.com_if<2>]]></id>
<time>2010-02-16T17:35:05.0+0000</time>
<measValue indx="4"><![CDATA[100000000]]></measValue>
<measValue indx="5"><![CDATA[up:up]]></measValue>
<measValue indx="6"><![CDATA[ethernetCsmacd]]></measValue>
</measInfo>
</result>
</response>

The three index headers are stated at the beginning so that the naming information does not have to be repeated throughout the output file. The header information is wrapped in the <header> tag. All measurement information is wrapped in <measInfo> tags.

Time Series Examples

The time series examples demonstrate how to extract data for a given time period. There are two time series examples.

Time Series: Raw

The MDE command line raw time series example.

Purpose

The purpose of this example is to demonstrate how to gather raw data for a set time period. As with all the MDE examples, there are two things you must define to gain the required output:
- The command, which allows you to supply details of the application server, access credentials, timing, and so on.
- The query, which allows you to specify the desired metrics and attributes, as well as sorting, aliasing, and more.

As part of this MDE example, a short excerpt from the output file is displayed.

Command line

Note: To ensure the following command does not run off the page, carriage returns have been inserted. A forward slash, "/", identifies where a carriage return has been added.

mde -u tipadmin -p tipadmin -host host.us.ibm.com -port 16315 -s 2009-12-05T00:00:00.0+0000 /
-e 2009-12-05T23:00:00.0+0000 -query ..\queries\RawTimeSeriesQuery.xml -o ..\output\ -f xml

Note: The mde command string used in the sample is mde.sh for UNIX and mde.bat for Windows.

The following is specified in the command:
- -u tipadmin: The username is set to tipadmin.
- -p tipadmin: The password is set to tipadmin.
- -host host.us.ibm.com: The application server name is set to host.us.ibm.com.
- -port 16315: The access port is set to 16315.
- -query ..\queries\RawTimeSeriesQuery.xml: The query file containing the attributes and metrics to be extracted is set to RawTimeSeriesQuery.xml.
- -s 2009-12-05T00:00:00.0+0000: The start time is set to 2009-12-05T00:00:00.0+0000.
- -e 2009-12-05T23:00:00.0+0000: The end time is set to 2009-12-05T23:00:00.0+0000.
- -o ..\output\: The output directory is set to the ..\output\ directory.
- -f xml: The output format is set to xml.

Query

In the following query both attributes and metrics are defined.

<?xml version="1.0" encoding="UTF-8"?>
<query>
<resourcegroup>NOC Reporting</resourcegroup>
<metric>AP~Generic~Universal~Throughput~Inbound Throughput (bps)</metric>
<metric>AP~Generic~Universal~Throughput~Outbound Throughput (bps)</metric>
<attribute>AP_ifSpeed</attribute>
<attribute>AP_ifStatus</attribute>
<attribute>AP_ifType</attribute>
<queryidentifier>Raw_Timeseries</queryidentifier>
</query>

From the NOC Reporting group the following attributes and metrics are requested:
- The AP~Generic~Universal~Throughput~Inbound Throughput (bps) metric.
- The AP~Generic~Universal~Throughput~Outbound Throughput (bps) metric.
- The AP_ifSpeed attribute.
- The AP_ifStatus attribute.
- The AP_ifType attribute.

The queryidentifier, Raw_Timeseries, is used in the naming of the output file.
Note: Multiple occurrences of the same metric in a query produce only one result.

Output Excerpt

<?xml version="1.0" encoding="UTF-8"?>
<response>
<result>
<granularity>raw</granularity>
<header>