Dino Explorer Suite. User's Guide version 6.2.3



Contents

Introduction ... 4
    Mainframes ... 4
    The SMF ... 5
    Dino architecture ... 6
    Dino database ... 7
    MVS agents ... 7
    Server utilities ... 8
    Other utilities ... 8
    Data loading ... 9
    Dino portal ... 10
Dino Explorer products ... 11
    CPU Explorer ... 12
    CICS Explorer ... 18
    DASD Explorer ... 22
    Dataset Explorer
    DB2 Explorer
    IO Explorer
    MSU Explorer
    Dino Smart
Common Features ... 108
    Query interface
    Filters
    Grouping
    Reports
    Configuration
Import and Loading Data ... 136
    Import from MVS Server
    Import from CSV File
    Load Views
Configuration Tasks ... 151
Administrative Tools ... 151
    Actions
    View Log
    Swap data
Compressing and Purging Data ... 154
    Compress data
    Purge records
Dasd Discovery ... 160
    Load configuration
    Update device configuration
    Set current configuration
Configuration ... 164
    Database Connection
    Product License
    MVS Servers
    Loader fields configuration
    DinoMessaging Service
    DinoUtil CLI interface
Appendix ... 171
    Database Tables
    DinoCmd Query CLI interface
    Portal Customization
    Query names
Glossary ... 207

Introduction

Dino Explorer Suite is a set of products aimed at managing the workload of complex IBM mainframes running the z/OS operating system. These mainframe computers are powerful machines that support the core applications of the largest corporations around the world.

Mainframes

Mainframes are specialized computers that can run thousands of parallel programs and perform a huge amount of I/O: thousands of transaction executions and many gigabytes of data transferred in a single second. As you might expect, these machines are far more expensive than ordinary PCs and servers. So a mainframe system normally:

- is shared by many applications;
- runs close to 100% CPU busy;
- runs around the clock (24 x 7);
- has accounting and charge-back policies;
- is constantly monitored.

The main idea is "run everything on the smallest box", because the bigger the mainframe, the higher the costs involved. Most surprising of all, software costs (licenses) are far higher than the hardware costs, and license prices are based on the size of the box. So managers are always worried about the mainframe, either because the system is down due to a mainframe failure, or because of the cost implications of hardware upgrades. To support many applications on the same box, organizations were forced to organize their workload by creating naming rules for system components such as program names, filenames, job names and system names. Through these rules, it is possible to account for resource usage and assign it to the corresponding application or business area.

System resources

Mainframes are like any other computer: they are composed of CPUs, memory (RAM) and I/O devices such as disks (DASD), tapes, printers and network adapters. These are the system resources. The operating system (z/OS) accounts for how much of each resource was used by each program and registers it on a log file (SMF).
Some examples of system resources are: CPU time; number of I/O operations (EXCPs); number of job executions; number of file accesses; service units. By counting system resource utilization across the various applications, we can charge back the mainframe and apportion the corresponding costs according to usage. It is a simple way to share the costs. And this is exactly what Dino Explorer does for you: it counts these counters.
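The charge-back idea described above amounts to a proportional split of the mainframe cost by measured usage. The sketch below illustrates the arithmetic with made-up application names, a single counter (CPU seconds) and an invented total cost; real charge-back policies typically weight several counters (EXCPs, service units, and so on).

```python
# Sketch of proportional charge-back. Application names, the counter
# values and the total cost are all invented for illustration.

def charge_back(usage, total_cost):
    """Split total_cost among applications proportionally to their usage."""
    total = sum(usage.values())
    return {app: total_cost * used / total for app, used in usage.items()}

# Hypothetical monthly CPU seconds per application (derived from job-name rules)
usage = {"BILLING": 7200.0, "PAYROLL": 1800.0, "SALES": 1000.0}
costs = charge_back(usage, total_cost=100_000.0)
```

The naming rules mentioned earlier are what make the `usage` dictionary possible in the first place: only with consistent job and program names can counters be assigned to a business area.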

The SMF

System Management Facilities (SMF) is one of the main components of the z/OS operating system. It is responsible for gathering and managing the logs (SMF records) generated by the various components of the mainframe; SMF is the central repository for mainframe events. Each event carries a tag that identifies its format: the SMF record type. The following table shows some record types:

SMF type  Function                Description
14        Non-VSAM input          A file opened for reading has been closed; this record shows all details about who accessed it and how
15        Non-VSAM output         Similar information for files opened for writing
30        Job or step statistics  Details about jobs and programs running on the mainframe
42.6      SMS dataset             Dataset I/O statistics per interval
61        Catalog dataset         Catalog dataset operations
64        VSAM close              Insert, delete and retrieve counters from indexed VSAM files
65        Delete dataset          Delete dataset operations
66        Alter dataset           ICF catalog alter operations
70.1      Processor activity      RMF processor activity
73        Channel path activity   RMF channel path activity per interval
74.1      Device activity         RMF device I/O statistics per interval
74.5      Cache activity          RMF cache subsystem device activity
78.3      LCU activity            RMF LCU / HyperPAV activity per interval
101       DB2 accounting          Accounting record for each thread execution
110       CICS transactions       Transaction execution details such as CPU time, duration and bytes transmitted

Installations save the SMF events on special files, the SMF dumps, which are datasets in VBS format (variable-length spanned records). These files normally stay a few days on disk and are then copied to tapes, where they are retained for many years or even decades. Accommodating these records requires a lot of storage (disk and tape): at large financial companies, we are talking about millions or perhaps billions of records in a single day.
Fortunately, the system administrator can select which record types will be persisted, although some companies prefer not to process this information at all to avoid the extra work.
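The record-type selection just mentioned is essentially a filter applied before records are persisted. The sketch below pictures it with type keys taken from the table in this section; the simplified string-keyed layout and the chosen "wanted" set are illustrative, not the actual SMF exit mechanism.

```python
# Sketch: filter SMF records by type before persisting them.
# Keys follow the "type.subtype" convention used in the table above.
SMF_TYPES = {
    "14": "Non-VSAM input",
    "15": "Non-VSAM output",
    "42.6": "SMS dataset I/O statistics",
    "70.1": "RMF processor activity",
    "74.1": "RMF device activity",
    "101": "DB2 accounting",
    "110": "CICS transactions",
}

def keep(record_type, wanted=("70.1", "74.1", "110")):
    """Return True if this record type should be persisted."""
    return record_type in wanted

kept = [t for t in SMF_TYPES if keep(t)]
```

Narrowing the persisted set this way is what keeps the daily record volume (and the disk and tape behind it) manageable.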

SMF records are very rich in detail, and they are the main source of information for Dino Explorer. A lot of effort has gone into collecting these records efficiently while avoiding processing overhead on the mainframe.

Manage out of the box

As you may have noticed, mainframes are very special boxes, but they are expensive and should be reserved for processing the core business, not every application. Many installations spend more resources trying to figure out what is going on inside the box than producing real work. We believe that some monitoring and management functions should run outside the mainframe, leaving it free to run more business applications. The idea is simply to get the information off the mainframe and do all the processing on another computer, on a platform better suited to the job. Currently we run on the Windows platform.

Dino architecture

The events are downloaded to the Dino Explorer database (Dino DB), which resides on an open-platform database. In this architecture, queries do not interfere with the mainframe workloads, because they run on the server (database) instead of the mainframe. On the mainframe runs an agent responsible for collecting the events and transferring them to the Dino database. Apart from these agents, all components of Dino Explorer access only the Dino database. A brief introduction to the main components of the Dino Explorer Suite follows:

1. Dino database;
2. MVS agents;
3. Server utilities;
4. Other utilities;
5. Data loading;
6. Dino Explorer products;
7. Dino portal.

Dino database

The Dino database is the core of the Dino Explorer Suite, where all configuration and data are stored. Dino Explorer is a client-server application, and the only information you need to provide is the location of the database, i.e. the connection string. DinoDB is a historical database of mainframe events, so you can easily:

- keep track of evolution;
- compare with the past;
- account for usage;
- predict the future.

MVS agents

The Dino Explorer MVS agents are the components responsible for collecting information on the mainframe to be loaded into the Dino database. They are the only software that runs on the mainframe; all other components of the Suite run on the open platform. Events can be intercepted on the mainframe in two ways, batch and real-time, and there are currently three MVS agents:

Type       MVS component    Description
Batch      SMF Collector    DXPLSMF, a batch program that reads SMF dump files and generates a CSV file with the events
Real-time  Dino TCP Server  Implements a TCP server on the mainframe to respond to Dino requests from the Dino Server utilities
Real-time  Dino Messaging   Intercepts the events in real time and buffers them in memory to be downloaded to the open platform using the TCP Server facility

Server utilities

The Dino Server utilities are administrative utilities responsible for database maintenance and data loading. The following table describes them:

Server utility  Description
DinoSetup       Dino database logical initialization. Users should run this utility after installing a new server version
DataLoader      Administrator's tool: configuration, data loading, log viewing and so on
DinoMessaging   Windows service responsible for downloading real-time events from the mainframe
DinoUtil        Command-line utility, normally used to automate loading tasks

The most important task is loading the events from the mainframe into the Dino database. This can be done by reading from a file or by getting the events directly from the mainframe through TCP/IP connections to the Dino Messaging tasks running there.

Other utilities

The table below presents the general-purpose utilities:

Utility   Description
DinoTask  Creates User Input events to be monitored by Application Impact Monitoring (AIM) from tasks outside the mainframe. You can add these events to your production scripts
DinoCmd   Command-line interface to run Dino queries from your scripts

Data loading

Data loading is the process of collecting the events on the mainframes and inserting that information into the Dino database. The way we feed the database is fundamental to the functionality we are going to use. The fastest and cheapest way to load data into the Dino DB is certainly the real-time process using the Dino Messaging facility, compared with the traditional batch and file-transfer way. Take a good look at the following diagram comparing both processes. The green path represents real-time data loading. As you can see, the path is short and avoids at least eight I/O operations per SMF record. Imagine how many I/O operations and how much storage space you can save using Dino Messaging. The batch collector, on the other hand, is the program DXPLSMF, which reads the SMF files and creates a CSV (comma-separated values) file with the important information extracted from the SMF records. These files are transferred to the open platform using FTP or any other file-transfer product. The biggest advantage of the batch method is that you do not need to install the messaging components on the mainframe. You can also run the SMF collector from another mainframe partition that can access the SMF dump files. In this way, you can collect the mainframe events without any interference on the environment.
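On the open-platform side, the batch path ends with parsing the transferred CSV and summarizing it into the database. The sketch below shows that kind of parse-and-summarize step; the column layout (sysplex, SID, jobname, CPU seconds) is hypothetical and not the actual DXPLSMF collector format.

```python
import csv
import io

# Hypothetical collector CSV: one row per job execution.
SAMPLE = """sysplex,sid,jobname,cpu_sec
PLEX1,SYSA,BILL01,12.5
PLEX1,SYSA,PAY001,3.0
PLEX1,SYSB,SAL010,7.5
"""

def cpu_per_lpar(text):
    """Sum CPU seconds per LPAR (SID), the kind of summary built at load time."""
    totals = {}
    for row in csv.DictReader(io.StringIO(text)):
        sid = row["sid"]
        totals[sid] = totals.get(sid, 0.0) + float(row["cpu_sec"])
    return totals

totals = cpu_per_lpar(SAMPLE)
```

In the real product this summarization is done by the DataLoader against the imported data tables; the snippet only illustrates the shape of the work.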

Dino portal

The Dino Portal is a web application infrastructure that enables you to publish information from Dino Explorer following your own organization's culture, to:

- create business views for your managers;
- mix company data with Dino historical data;
- build complex reports;
- build charts and real-time monitors.

Some examples follow. You can run queries on the Dino database directly from your programs, using the Dino support classes (application program interface), just as easily as in any of the Dino product interfaces. Currently these classes are implemented in C# for .NET platforms. A query example extracted from the Dino portal follows:

Dino Explorer products

This section covers in detail the main features of each Dino Explorer product. All Dino Explorer products are designed to work as query builders over the Dino database, to make the tasks that users need to perform with SMF information easier. Each Dino Explorer product is a specific view of your mainframe, and each one works with a specific set of SMF records. The main objective of the Dino Explorer products is to work with SMF records outside the 'box', saving the expensive, limited and important resources that are normally wasted on the mainframe manipulating SMF log data, resources that could instead be doing real company work. Next, you will learn in depth how each Dino Explorer product works and what each one can do to help you when you have to work with SMF records. The Dino Explorer products are based on 4bears Technologies' proprietary, non-intrusive technology, which demands virtually no mainframe CPU cycles. The Dino Explorer Suite is currently composed of the following products:

CICS Explorer     CICS transaction executions
CPU Explorer      System resource usage by programs, jobs, applications, LPARs, sysplexes and users
DASD Explorer     DASD storage configuration and space management: storage groups, channel paths (CHPIDs), subsystems
Dataset Explorer  VSAM and non-VSAM dataset usage
DB2 Explorer      Keeps track of your DB2 accesses from CICS and IMS transactions and from open-platform access (DRDA)
IMS Explorer      IMS transaction executions
IO Explorer       I/O usage per device address, tapes, jobs, programs
MSU Explorer      Analyzes MSU consumption information used for WLC charges
SMART             Real-time monitor for jobs and started tasks (STC)

Each product is related to a specific subject, but all share the same database (dinodb), and they have the same structure and interface.

CPU Explorer

The CPU Explorer is an analytic tool that allows users to track and analyze the usage patterns and trends of mainframe resources in an effective and straightforward way. Its main function is submitting queries to the Dino database about CPU utilization and about all jobs that are running or have already been executed on the mainframe. There are several relevant tasks that users can perform with this powerful tool:

- Track mainframe workload over a given time horizon, based on historical records;
- Verify resource usage based on jobs, users or a specific program;
- Identify the jobs or programs that consume the most resources;
- Measure resources spent on systems with excessive numbers of ABENDs or unsuccessful executions;
- Assess workload trends and perform thorough analyses of program execution patterns;
- Build assessment and billing reports based on system resource usage;
- Measure data from general-purpose processors, LPARs and coupling facilities;
- Verify processor usage by the entire CEC or by each of its LPARs individually.

The CPU Explorer main window is shown below:

The CPU Explorer data is derived from the SMF records described below.

Type (Dec)  Type (Hex)  Subtype  Description
30          1E                   Common address space work
                         1       Job or task initiation
                         2       Interval termination
                         3       Last interval termination
                         4       Step termination
                         5       Job or task termination
                         6       System address space interval termination
70          46                   RMF/CMF processor activity
                         1       CPU, PR/SM and ICF activity

The CPU Explorer product has the following menu items:

Title        Description
Data         Real-time imported data. As soon as the information is imported into the database, you can see it in this menu. Note: normally the end of the Load Views process deletes (truncates) all the data in this view
Jobs         Queries about jobs. Note: in z/OS everything is a job: TSO users, started tasks (STC), OMVS sessions
LPAR         Historical information about LPARs (z/OS systems). This is the fastest way to get historical data about a z/OS system, because this view has far fewer records than the others. So start your queries here, then go to the other views once you know the period you want to dig into
Programs     Queries about program executions (the steps of a job). From this view you can easily discover which jobs execute a program: enter the program name and select a grouping by jobname, and you will get one result line for each jobname that executes your program. Note: normally this is the biggest view in CPU Explorer
Users        Queries historical information about users. Note: for future use
Services     Queries about services (resource usage consolidated by common features; see the Services menu below)
Performance  Queries about system MSU consumption: 4-hour rolling average, defined capacity, CPU times, number of processors, zIIP CPU time

Each menu item has a set of submenu items. Each submenu item represents a specific query to be submitted to the Dino database. From now on, these queries will be referred to as reports.
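The Programs-menu tip above (filter on a program name, group the result by jobname) is essentially a filtered group-by. A minimal sketch of that query shape, over invented execution records:

```python
from collections import defaultdict

# Invented program-execution records; in the product these come from
# the historical programs view in the Dino database.
executions = [
    {"program": "SORTPGM",  "jobname": "NIGHTJOB", "cpu_sec": 4.0},
    {"program": "SORTPGM",  "jobname": "DAYJOB",   "cpu_sec": 1.5},
    {"program": "SORTPGM",  "jobname": "NIGHTJOB", "cpu_sec": 2.0},
    {"program": "OTHERPGM", "jobname": "DAYJOB",   "cpu_sec": 9.0},
]

def per_jobname(records, program):
    """One result line per jobname that executed the given program."""
    out = defaultdict(float)
    for r in records:
        if r["program"] == program:
            out[r["jobname"]] += r["cpu_sec"]
    return dict(out)

result = per_jobname(executions, "SORTPGM")
```

Each key in `result` corresponds to one result line in the product's grouped view.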

Data menu

Information is inserted into the Dino database by two processes: data importation and data loading. Data importation runs first and inserts raw data into work tables called imported data tables. Data loading then formats the raw data and loads the formatted data into the related historical tables. The reports in this menu submit queries to the tables used by the data importation process. The data menu reports are listed below:

Report                 Description
List details           List imported data records. Query records exactly as they were imported
Summary per partition  View imported data summary per partition. Summarizes resource usage counters grouped by sysplex, system name and SID (LPAR)
Summary per sysplex    View imported data summary per sysplex. Summarizes resource usage counters grouped by sysplex
Total summary          View imported data total summary. Summarizes resource usage counters for all imported records
CPU activity per LPAR  List CPU activity imported data per LPAR. Summarizes resource usage counters grouped by LPAR
CPU activity per CEC   List CPU activity imported data per CEC. Summarizes resource usage counters grouped by CEC

Jobs menu

In this menu we can follow the resource usage of the jobs and steps executed, as well as their behavior during execution, through groupings and views that aid understanding. The jobs menu reports are listed below:

Report                 Description
Steps executions       View job step executions. Query step execution records exactly as they were imported from the CSV file
Jobs executions        View job executions. Query job execution records exactly as they were imported from the CSV file
List details           List imported data records. Query records exactly as they were imported
Summary per partition  View imported data summary per partition. Summarizes resource usage counters grouped by sysplex, system name and SID (LPAR)
Summary per sysplex    View imported data summary per sysplex. Summarizes resource usage counters grouped by sysplex
Total summary          View imported data total summary. Summarizes resource usage counters for all imported records
Top jobs               List top job executions based on a selected resource usage counter
Averages report        List job execution averages. Calculates resource usage counter averages

LPAR menu

The LPAR view gives a good idea of the resource consumption and behavior of the various processes running in this environment; through the groupings, we get a snapshot of a partition inside the sysplex and in relation to the other partitions. The following table shows the reports for the LPAR menu:

Report                 Description
List details           List LPAR records. Query historical LPAR records
Summary per partition  View executions summary per partition. Summarizes resource usage counters grouped by sysplex, system name and SID (LPAR)
Summary per sysplex    View executions summary per sysplex. Summarizes resource usage counters grouped by sysplex
Total summary          View total summary for all partitions. Summarizes resource usage counters for all execution records

Programs menu

The programs view lets you track the resources used by each program and its behavior; the groupings give an overview of the environments (LPAR, sysplex) where programs are executed, and let you compare a program's behavior with that of other programs. The following table shows the reports for the programs menu:

Report                 Description
List details           List program records. Query historical program records
Summary per partition  View program execution summary per partition. Summarizes resource usage counters grouped by sysplex, system name and SID (LPAR)
Summary per sysplex    View program execution summary per sysplex. Summarizes resource usage counters grouped by sysplex
Total summary          View program total summary. Summarizes resource usage counters for all program execution records
Top executions         List top program executions based on a selected resource usage counter

Users menu

The users view lets you track the processes performed by users and the resources they consume; the groupings give an overview of the environments (LPAR, sysplex) the users connected to, and let you compare their resource consumption with that of other users. The following table shows the reports for the users menu:

Report                 Description
List details           List user records. Query historical user records
Summary per partition  View user execution summary per partition. Summarizes resource usage counters grouped by sysplex, system name and SID (LPAR)
Summary per sysplex    View user execution summary per sysplex. Summarizes resource usage counters grouped by sysplex
Total summary          View user execution total summary. Summarizes resource usage counters for all user execution records

Services menu

Services are information consolidated by common features during the data loading phase. For example: job names starting with "SL" belong to the "Sales" service. The following table shows the reports for the services menu:

Report                 Description
List details           List service records. Query historical service records
Summary per partition  View service execution summary per partition. Summarizes resource usage counters grouped by sysplex, system name and SID (LPAR)
Summary per sysplex    View service execution summary per sysplex. Summarizes resource usage counters grouped by sysplex
Total summary          View service execution total summary. Summarizes resource usage counters for all service execution records

Performance menu

The reports under the performance menu item query historical information about RMF processor activity. The following table shows the reports for the performance menu:

Report            Description
LPAR Activity     View processor activity summary per partition
Sysplex Activity  View processor activity summary per sysplex
CEC Activity      View processor activity summary per CEC
Total Activity    View processor activity summary for all CECs
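The consolidation rule in the Services example above (job names starting with "SL" belong to "Sales") is a prefix mapping applied at load time. A sketch of that mapping; the prefix table and job names are illustrative, since the real rules are defined in your own load configuration:

```python
# Map job names to services by prefix, as in the "SL" -> "Sales" example.
# The prefix table below is invented for illustration.
SERVICE_PREFIXES = {"SL": "Sales", "PR": "Payroll", "BI": "Billing"}

def service_of(jobname, default="Other"):
    """Return the service a job belongs to, based on its name prefix."""
    for prefix, service in SERVICE_PREFIXES.items():
        if jobname.startswith(prefix):
            return service
    return default

services = [service_of(j) for j in ("SL001A", "PRMNTH", "XYZJOB")]
```

This is why consistent job-naming rules matter: without them, resource usage cannot be consolidated into services at all.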

CICS Explorer

The CICS Explorer is an analytic tool that allows users to track and analyze the usage patterns and trends of mainframe CICS transaction resources in an effective and straightforward way. There are several relevant tasks users can perform with this powerful tool:

- Track CICS transaction workload over a given time horizon, based on historical records;
- Verify resource usage based on CICS instances, transactions, terminals, jobs, logical partitions, programs and users;
- Identify the transactions that consume the most CICS resources.

The CICS Explorer main window is shown below:

The CICS Explorer data is derived from the SMF records described below.

Type (Dec)  Type (Hex)  Subtype  Description
110         6E                   CICS/TS statistics
                         1       Monitoring data

Data menu

The Data menu holds the real-time data you have been receiving from the mainframe since the last Load Views operation. All the queries in the Data menu refer to the CICS Imported Data target, where there is one record for each CICS transaction execution. The following table describes the available queries:

Report                 Description
List details           List the data records. Each record represents one CICS transaction. Through this query you get all the data (fields) available about the transaction. Caution: you may run out of memory!
Summary per partition  Summarizes all CICS transactions executed on the same LPAR (SID). In this query you get one line per LPAR for each selected interval: minute, hourly, daily
Total summary          Summarizes all CICS transaction executions. Returns a single line per selected interval
Top transactions       List top transaction executions based on a selected resource usage counter such as duration, CPU time, DB2 calls or lock time. You can limit the number of transactions returned (Max, Quantity)
Response times         Summarizes the transactions by duration in 10 ms intervals, such as 00:00.01, 00:00.02, 00:00.03
Performance Report     Calculates performance rates for the CICS transactions (by transaction code), such as transactions per second (TPS), response times and wait times
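The Response times report above groups transaction durations into 10 ms buckets. The sketch below shows that bucketing (round each duration up to the next 10 ms edge and count per bucket); the durations are invented, and integer milliseconds are used to keep the arithmetic exact.

```python
from collections import Counter

def bucket_ms(duration_ms, width_ms=10):
    """Round a duration up to its 10 ms bucket edge: 13 ms -> 20, 10 ms -> 10."""
    return ((duration_ms + width_ms - 1) // width_ms) * width_ms

# Invented transaction durations in milliseconds
durations_ms = [3, 9, 13, 18, 27, 10]
histogram = Counter(bucket_ms(d) for d in durations_ms)
```

Each bucket edge corresponds to one row of the report (00:00.01, 00:00.02, and so on), with the count of transactions that finished within it.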

Common history queries

Whichever menu you select in CICS Explorer, you always get data from the same place (target, or database table). The only difference is the grouping used in the query:

Menu          Grouping field  Description
CICS          CICS name       Logical CICS region name (LU6.2 name)
Transactions  Transaction     Transaction code (4-char)
Terminal      Terminal        Terminal or session
Jobs          Job name        CICS region job name
LPAR          SID             System identification: 4-char identification of a z/OS system
Program       Program name    The name of the initial program that a transaction executes
Users         User            Username context

The following table shows the common queries:

Report                 Description
List details           List CICS history records. Each record may represent the execution of several similar transactions occurring in the same interval. The Transactions field shows the number of transactions summarized in the record, and all the counters are totals of the individual execution counters
Summary per partition  Summarizes the historical data grouped by SID, so you get one result line for each LPAR. In this query, whatever grouping you select, the result is still grouped by SID
Total summary          Summarizes all records into a single result line per interval

Transactions menu

In this menu, the central point is the transaction code. The transactions menu reports are listed below:

Report                   Description
Transactions executions  List the transaction executions, summarized or in detail if you drill down into a result line. This query only returns data if Load Transaction Executions is selected in the Data Loader load configuration
List details             See Common history queries
Summary per partition    See Common history queries
Total summary            See Common history queries
Averages report          Calculates the average of all counter fields by dividing each total by the number of executions. This report gives you the transaction profile: duration time, number of DB2 calls and so on
Performance report       Calculates performance rates for the CICS transactions (by transaction code), such as transactions per second (TPS), response times and wait times. Note: to get data in this query, you need to select the Minute interval in the CICS History load views
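The Averages report above divides each counter total by the number of executions to produce the transaction profile. A sketch of that division, with invented totals and execution count:

```python
# Transaction profile: average each counter over the execution count.
# Counter names and values below are invented for illustration.

def profile(totals, executions):
    """Divide every counter total by the number of executions."""
    return {name: total / executions for name, total in totals.items()}

avg = profile(
    {"cpu_sec": 50.0, "db2_calls": 1200, "duration_sec": 250.0},
    executions=1000,
)
```

The resulting averages read directly as a profile: CPU time per transaction, DB2 calls per transaction, duration per transaction.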

DASD Explorer

The DASD Explorer is an analytic tool that allows users to track and analyze the usage of DASD volumes on IBM mainframe computers. There are several relevant tasks users can perform with this powerful tool:

- Track volume workload over a given time horizon, based on historical records;
- Verify volume resource usage organized by logical partitions and storage groups;
- Check and discover DASD capacity, physical paths to devices, LPAR paths to devices, volumes per LCU (Logical Control Unit), volumes sharing the same CHPID (Channel Path ID), volume occupancy and volume characteristics such as supplier.

The DASD Explorer main window is shown below:

The DASD Explorer data is derived from the SMF records described below and from DCOLLECT "V" records.

Type (Dec)  Type (Hex)  Subtype  Description
73          49                   RMF/CMF channel path activity
74          4A                   RMF/CMF resource activity
                         1       Device activity
                         5       Cache subsystem device activity
78          4E                   RMF/CMF virtual storage & I/O activity
                         3       I/O queuing & HyperPAV activity

The DASD Explorer product has the following menu items:

Title          Description
Data           Query the raw imported records. Non-historical data
Physical       Storage occupation and configuration from a storage perspective. Based on the storage discovery functions (DASD Load Configuration)
Logical        Storage occupation and configuration from a z/OS perspective (LPARs, DEVNUM). Based on the storage discovery functions (DASD Load Configuration)
Views          Storage configuration explorers based on the storage discovery functions (DASD Load Configuration)
Storage Array  Historical information from the storage arrays' perspective. Based on RMF data
LPAR           Historical information from the LPARs' perspective. Based on RMF data
CEC            Historical information from the CEC (CPC) perspective. Based on RMF data
Storage Group  Historical information from the SMS storage groups' perspective. Based on RMF data
Volume         Historical information from a single volume's perspective. Based on RMF data

Each menu item has a set of submenu items. Each submenu item represents a specific query to be submitted to the Dino database. From now on, these queries will be referred to as reports.

Data menu

In the Data menu you find the information that has been downloaded from the mainframe and not yet migrated to the historical views (by the Load Views process). So in this view you find information that happened just a few seconds ago on your mainframe. Real-time monitors use this view to get current usage information. The data menu reports are listed below:

Report             Description
Cache              View CACHE imported data records. Based on RMF 74.5 records
Device             View DEVICE imported data records. Based on RMF 74.1 records
Channel            Channel activity imported data. Based on RMF 73 records
LCU channel paths  LCU activity imported data. Based on RMF 78.3 records
Hyper-PAV          Hyper-PAV activity imported data. Based on RMF 78.3 records
Volumes            Volume occupation imported data. Based on DCOLLECT "V" records

Physical

In the Physical menu, you query the latest information extracted from the mainframe about storage occupation and configuration. The physical menu reports are listed below:

Report              Description
Volumes             List unique installation volumes
Volumes history     View occupancy history by volume
Duplicated volumes  List physical volumes that have the same volser in the installation
Occupancy history   View volume occupancy history by date
Storage groups      List SMS storage groups in the installation

Logical

In the Logical menu, you query the latest information extracted from the mainframe about storage occupation and configuration for each LPAR.

The logical menu reports are listed below:

Report             Description
LPar volumes       View volume information per partition
Volumes            List all volume information
Occupancy history  View volume occupancy history per LPAR

Views

In the Views menu, you can view the configuration of LPAR volumes schematically, by LCU, channel path and LCU matrix. The views menu reports are listed below:

Report              Description
LPar explorer       Hierarchical view per partition
Channel Explorer    Hierarchical view per channel path
LCU volumes matrix  View volumes occupancy history per LPAR

Storage Array

The Storage Array menu reports are listed below:

Report                 Description
Physical View          Configuration explorer: storage array -> SSID -> volume
Logical View           Configuration explorer: storage array -> storage group -> volume
Cache Activity         CACHE summary history: IOPS, IO time, MB/s, IO type, zHPF, cache hits
Front-end Activity     DEVICE summary history: IOPS, IO times (connect, disconnect, pending)
Channel Activity       Channel activity history: MB/s, read MB/s, write MB/s, LPAR units, unit size
Configuration          Configuration history: storage arrays, SSIDs, SID, LCU
Volumes Configuration  Configuration history: storage arrays, volumes, devices, capacity
Space Allocation       Volume occupation history: allocated, used and free space, % fragments, largest extent

LPAR menu

The LPAR menu reports are listed below:

- Physical View: configuration explorer: SID -> CSS id -> CHPID -> storage array -> SSID -> volume.
- Logical View: configuration explorer: SID -> storage group -> volume.
- Front-end Activity: DEVICE summary history: IOPS, IO times (connect, disconnect, pending).
- Channel Activity: channel activity history: MB/s, read MB/s, write MB/s, LPAR units, unit size.
- Configuration: configuration history: SID, LCU, SSIDs, storage arrays.
- Volumes Configuration: configuration history: SID, volumes, devices, capacity.
- Space Allocation: volume occupation history: allocated, used and free space, % fragments, largest extent.

CEC menu

The CEC (Central Electronic Complex), or CPC (Central Processor Complex), menu reports usage by a whole mainframe computer. Its reports are listed below:

- Physical View: configuration explorer: CEC -> CSS id -> CHPID -> storage array -> SSID -> volume.
- Logical View: configuration explorer: CEC -> SID -> storage group -> volume.
- Cache Activity: CACHE summary history: IOPS, IO time, MB/s, IO type, zHPF, cache hits.
- Channel Activity: channel activity history: MB/s, read MB/s, write MB/s, CPC and zHPF activity.
- Configuration: configuration history: CEC name, CHPID, storage array, SSID.

Storage Group menu

The Storage Group menu reports are listed below:

- Physical View: configuration explorer: storage group -> storage array -> volume.
- Logical View: configuration explorer: storage group -> volume.
- Front-end Activity: DEVICE summary history: IOPS, IO times (connect, disconnect, pending).
- Cache Activity: CACHE summary history: IOPS, IO time, MB/s, IO type, zHPF, cache hits.
- Volumes Configuration: configuration history: storage group, volumes, devices, capacity.
- Space Allocation: volume occupation history: allocated, used and free space, % fragments, largest extent.

Volume menu

The Volume menu reports are listed below:

- Cache Activity: CACHE volume history: IOPS, IO time, MB/s, IO type, zHPF, cache hits.
- Front-end Activity: DEVICE volume history: IOPS, IO times (connect, disconnect, pending).
- Volumes Configuration: configuration history: volser, capacity.
- Space Allocation: volume occupation history: allocated, used and free space, % fragments, largest extent.
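The allocated/used/free counters shown in the Space Allocation reports lend themselves to a simple derived metric. A minimal sketch (the function and field names are illustrative, not part of the product):

```python
def volume_occupancy_pct(allocated_gb: float, used_gb: float) -> float:
    """Percent of a volume's allocated space that is actually used.

    Mirrors the allocated/used fields shown in the Space Allocation
    reports; names here are illustrative, not the product's field names.
    """
    if allocated_gb == 0:
        return 0.0
    return 100.0 * used_gb / allocated_gb
```

For example, a volume with 10 GB allocated and 5 GB used is 50% occupied.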

Dataset Explorer

The Dataset Explorer is an analytical tool that lets users track and analyze the usage patterns and trends of IBM mainframe computer files, known as datasets, in an effective and straightforward way. There are several relevant tasks users can perform with this powerful tool:

- Track dataset workload over a given time horizon, based on historical records;
- Verify resource usage for different file access methods by Job, Program and Logical Partition;
- Identify Jobs or Programs that consume the most resources, such as I/O via Execute Channel Programs (EXCPs).

The Dataset Explorer main window is shown below:

The Dataset Explorer data is derived from DCOLLECT "D" records and from the SMF records described below:

- Type 14 (0E): Input Data Set Activity
- Type 15 (0F): Output Data Set Activity
- Type 42 (2A), subtype 06: DFSMS Statistics and Configuration - Data Set I/O Statistics
- Type 61 (3D): ICF Catalog Define Activity
- Type 64 (40): VSAM Component or Cluster Status
- Type 65 (41): ICF Catalog Delete Activity
- Type 66 (42): ICF Catalog Alter Activity

The Dataset Explorer product has the following menu items:

- Data: query the raw imported records. Non-historical data.
- VSAM: historical information on VSAM dataset usage. Based on SMF 64 records.
- NVSAM: historical information on non-VSAM dataset usage. Based on SMF 14/15 records.
- Tape: historical information on tape non-VSAM dataset usage. Based on SMF 14/15 records.
- Allocation: non-VSAM DASD dataset inventory (tracks, extents and volser). Based on SMF 14/15 records.
- Inventory: DASD dataset inventory (VSAM and non-VSAM). Based on DCOLLECT data (DXCOLET job).
- Performance: historical DASD dataset (VSAM and non-VSAM) performance information: response times, I/O type. Based on SMF 42.6 data.
- Catalog: catalog operations on datasets: CATALOG, DELETE, RENAME. Based on SMF 61/65/66 records.

Each menu item has a set of submenu items. Each submenu item represents a specific query to be submitted to the Dino database; from here on, such a query is referred to as a report.
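The decimal and hexadecimal SMF type columns in the table above are the same number in two bases, which is handy when cross-checking record types against raw SMF dumps. A small illustrative sketch:

```python
def smf_type_hex(dec_type: int) -> str:
    """Two-digit uppercase hex form of an SMF record type (e.g. 42 -> '2A')."""
    return format(dec_type, "02X")
```

For instance, SMF type 42 is 0x2A and SMF type 64 is 0x40, matching the table.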

Data menu

On the Data menu, you find information that has been downloaded from the mainframe but not yet migrated to the historical views (the Load Views process). This view therefore contains data recorded on your mainframe only moments ago; real-time monitors use it to obtain current usage information. The Data menu reports are listed below:

- VSAM: view VSAM imported data records. Based on SMF 64 records.
- NVSAM: view non-VSAM imported data records. Based on SMF 14/15 records.
- VSAM unique: view the last "Load View" work table, with a single record per VSAM dataset usage. Based on SMF 64 records.
- NVSAM unique: view the last "Load View" work table, with a single record per non-VSAM dataset usage. Based on SMF 14/15 records.
- SMS: view SMS dataset (VSAM and non-VSAM) activity imported data. Based on SMF 42.6 records.
- Catalog: view dataset catalog operations: CATALOG, DELETE and RENAME. Based on SMF 61/65/66 records.
- Inventory: view dataset (VSAM and non-VSAM) DASD occupation imported records. Based on DCOLLECT "D" records.

VSAM menu

On the VSAM menu you find information about VSAM dataset usage, recorded in the SMF 64 record when a VSAM dataset is closed, describing how much activity the dataset received. The kind of information you will find in this view:

- VSAM usage history
- Which jobs access certain datasets
- Which datasets a job accessed
- Most heavily used VSAM datasets
- Datasets doing a lot of splits

The VSAM menu reports are listed below:

- DSNames: view VSAM usage history per dsname.
- LPAR: view VSAM usage history per SID.
- Jobs: view VSAM usage history per jobname.
- HLQ: view VSAM usage history per High Level Qualifier (HLQ), i.e. the first part of the dsname.

You can group the counters on any of the following fields, or on part of one (substring fields): Dsname, SID, Jobname, Username, Cluster name, Component type, Situation indicator.

The available counters: Use count (opens), Deletes, EXCPs, Physical IOs, Level increments, Inserts, CI splits, CF local, Extent increments, Updates, CA splits, CF xcf, Hiperbatch, Retrieves, CF dasd.
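The HLQ report groups usage by the first dot-separated qualifier of the dataset name. The grouping can be sketched like this (record layout and names are illustrative, not the product's schema):

```python
from collections import defaultdict

def hlq(dsname: str) -> str:
    """High Level Qualifier: the first dot-separated qualifier of a dsname."""
    return dsname.split(".", 1)[0]

def excps_by_hlq(records):
    """Sum EXCP counts grouped by HLQ, the grouping used by the HLQ report.

    records: iterable of (dsname, excp_count) pairs; an illustrative layout.
    """
    totals = defaultdict(int)
    for dsname, excps in records:
        totals[hlq(dsname)] += excps
    return dict(totals)
```

For example, `PROD.PAYROLL.MASTER` and `PROD.GL.LEDGER` both roll up under the `PROD` qualifier.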

NVSAM menu

On the NVSAM menu you find information about non-VSAM dataset usage, recorded in SMF 14 when a dataset is closed for input (reading) and SMF 15 when it is closed for output (writing), and kept in the NVSAM History target view. The kind of information you will find in this view:

- Dataset usage history
- SMS storage class usage
- Which jobs access certain datasets
- Which datasets a job accessed

The NVSAM menu reports are listed below:

- DSNames: view non-VSAM usage history per dsname.
- LPAR: view non-VSAM usage history per SID.
- Jobs: view non-VSAM usage history per jobname.
- Programs: view non-VSAM usage history per program name.
- HLQ: view non-VSAM usage history per High Level Qualifier (HLQ), i.e. the first part of the dsname.

You can group the counters on any of the following fields, or on part of one (substring fields): Dsname, Step name, SMS management class, SID, DSORG, SMS data class, Jobname, Volumes, SMS storage class, Username, UCB type.

Flag filters can be used to refine your queries: Tape, GDG, Input, New, Dasd, PDS member, Output, Old, Temporary, PDSE, Delete, Mod, VIO, Cataloged, Rlse, Shr, Hiper, Isam.

The available counters: Use count (opens), EXCPs, Max volumes.

Tape menu

On the Tape menu, you find information about non-VSAM dataset usage recorded in SMF 14 when a dataset is closed for input (reading) and SMF 15 when it is closed for output (writing). The Tape menu reports are listed below:

- DSNames: view non-VSAM usage history per dsname.
- LPAR: view non-VSAM usage history per SID.
- Jobs: view non-VSAM usage history per jobname.
- Programs: view non-VSAM usage history per program name.
- Tape usage: view tape dataset history and each dataset's specific location on tape: volser, label, format.

All the reports are the same as those on the NVSAM menu, with the single added filter "istape = true", except for Tape usage.

You can group the counters on any of the following fields, or on part of one (substring fields): Dsname, Rec format, SMS management class, SID, Lrecl, SMS data class, Creation date, Block size, SMS storage class, Expiration date, Seq count, Vol seq, Seq number, Volumes.

Flag filters can be used to refine your queries: GDG, Input, New, Temporary, Output, Old, Cataloged, Delete, Mod, Rlse, Shr.

The available counters: Use count (opens), EXCPs, Max volumes, Block count.

Allocation menu

On the Allocation menu, you find information about the size and format of non-VSAM datasets on disk. The data is collected from the SMF 14 and 15 records and kept in the Dataset Sizes target view. The kind of information you will find in this view:

- Dataset size in tracks and GBytes
- Location of datasets
- Inventory by creation date

The Allocation menu reports are listed below:

- Dataset sizes: list dataset sizes.
- Creation date: list total sizes per creation date.
- Volume: list total allocated space per volume.

You can group the counters on any of the following fields, or on part of one (substring fields): Dsname, DSORG, SMS management class, Creation date, Rec format, SMS data class, Expiration date, Space type, SMS storage class, PDS member, Volser, Volumes.

Flag filters can be used to refine your queries: Temporary, GDG, Input, New, VIO, PDS member, Output, Old, Hiper, PDSE, Delete, Mod, Isam, Cataloged, Rlse, Shr.

The available counters: Use count (opens), Extents, Tracks, Size (GB).
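Since the reports show sizes both in tracks and in GB, the two units can be related with a device-geometry assumption. A sketch, assuming 3390 DASD geometry (one 3390 track holds at most 56,664 bytes; other device types would use different constants):

```python
# Assumption: 3390 geometry, where one track holds at most 56,664 bytes.
BYTES_PER_3390_TRACK = 56_664

def tracks_to_gb(tracks: int) -> float:
    """Approximate capacity in GB for a track count, assuming 3390 devices."""
    return tracks * BYTES_PER_3390_TRACK / 1024**3
```

Under this assumption, roughly 19,000 tracks correspond to about 1 GB.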

Inventory menu

On the Inventory menu, you find information about the size and format of VSAM and non-VSAM datasets on disk. The data is collected from the DCOLLECT "D" format records and kept in the Inventory History target view. The kind of information you will find in this view:

- Dataset size in tracks and GBytes
- Location of datasets
- Storage groups

The Inventory menu reports are listed below:

- DSNames: list dataset sizes.
- Volume: list total allocated space per volume.
- HLQ: list total allocated space per high-level qualifier (HLQ).

You can group the counters on any of the following fields, or on part of one (substring fields): Dsname, Creation date, Block size, SMS management class, Volser, Expiration date, DSORG, SMS data class, Vol seq, Backup date, Extents, SMS storage class, 1st volser, Ref. date, Rec format, Storage group, Job name, Step name, Stripes.

Flag filters can be used to refine your queries: Cataloged, Racf, Reblock, Stripe, GDG, HFS, PDSE, SMS.

The available counters: Extents, Alloc (GB), Alloc 2nd (GB), Used (GB), Alloc over (GB), Total size (GB), Comp size (GB).

Performance menu

On the Performance menu, you find information about response times and the kinds of I/O the jobs perform during each interval. The data is collected from the SMF 42.6 records and kept in the SMS History target view. The kind of information you will find in this view:

- All the jobs that access the same datasets
- Response times per extent
- IO patterns: sequential vs. random, cache hits
- DB2 component response times: table and index spaces

The Performance menu reports are listed below:

- Datasets: list dataset response times.
- Volume: list total response times per volume.

You can group the counters on any of the following fields, or on part of one (substring fields): Dsname, SID, WLM Workload, Volser, Job name, WLM Class, Device number, User name, SMS storage class.

Flag filters can be used to refine your queries: Open, Close, Interval, PS, VSAM, PDS, PDSE, DA, HFS.

The available counters: Use count, Caches, RLS IO, Write hits, IO's, Seq IO, Inhibit IO, Seq read blk, Tot reads, Writes, Cache hits, Rand read blk, Dir reads, Dir writes, Max service, Rand write blk, Resp time, Conn time, Pend time, Disc time, Queue time, Actv time, Read disc, Max RT.
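The component times in the counter list (connect, pend, disc, queue) make up the overall response time, so an average response per I/O can be derived from them. A sketch, assuming the components are accumulated interval totals in milliseconds (names are illustrative, not the product's field names):

```python
def avg_response_ms(conn: float, pend: float, disc: float,
                    queue: float, io_count: int) -> float:
    """Average response time per I/O from the component time counters.

    Assumes conn/pend/disc/queue are accumulated totals for the interval,
    in milliseconds; names are illustrative.
    """
    if io_count == 0:
        return 0.0
    return (conn + pend + disc + queue) / io_count
```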

Catalog menu

On the Catalog menu, you find information about dataset catalog operations. The data is collected from the SMF 61, 65 and 66 records and uses the Catalog History target view. The kind of information you will find in this view:

- Who deleted a dataset?
- What happened to my dataset?

The Catalog menu reports are listed below:

- DSNames: list a total summary of catalog operations per dataset name.
- LPAR: list a total summary of catalog operations per SID.
- Jobs: list a total summary of catalog operations per job name.
- Catalog: list a total summary of catalog operations per catalog name.
- Rename: list the rename operations: old dsname, new dsname, jobname, SID and time.
- HLQ: list a total summary of catalog operations per alias or high-level qualifier (HLQ).
- Operations: list catalog operation records: Catalog, Scratch, Rename, Alter.

DB2 Explorer

DB2 Explorer uses the information collected in SMF record type 101, subtype 0 (DB2 accounting), for DB2 events. It keeps track of your DB2 accesses from CICS, IMS transactions and open-platform access (DRDA). The counters arrive in real time and at negligible resource cost, allowing you to view information such as:

- Response times and CPU times;
- Wait times: I/O, locks, stored procedures...
- Buffer pool activity
- SQL statements: SELECT, INSERT, DELETE
- Number of rows
- DB2 application resource consumption
- DB2 product consumption

And many other counters, all in real time, doing no I/O, with almost zero CPU.

The DB2 Explorer main window is shown below:

The DB2 Explorer data is derived from the SMF record described below:

- Type 101 (65), subtype 0: DB2 Accounting.

Data menu

On the Data menu, you find information that has been downloaded from the mainframe but not yet migrated to the historical views (the Load Views process). This view therefore contains data recorded on your mainframe only moments ago; real-time monitors use it to obtain current usage information. The Data menu reports are listed below:

- Summary from source: list by source (CICS, DRDA, ...).
- Transaction Summary: list a summary per transaction.
- Users Summary: list a summary per username (auth ID).
- Summary by remote computers: list a summary per computer name.
- Job Summary: list a summary per job.
- List Accounting Records: list accounting records.
- DB2 Performance Report: DB2 performance counters.

DB2 instances menu

On the DB2 instances menu you find information from all DB2 instances, populated in the historical table. The reports are listed below:

- Summary from source: list by source (CICS, DRDA, ...).
- DB2 Performance Report: DB2 performance counters.

CICS menu

On the CICS menu you find information on all CICS transactions that access the DB2 environment, populated in the historical table. The CICS menu reports are listed below:

- CICS Transactions Summary: list a summary per transaction.
- CICS Region Summary: list a CICS region summary.
- CICS Transaction Performance Report: CICS transaction performance counters.
- CICS Region Performance Report: CICS region performance counters.

Distributed menu

On the Distributed menu, you find information on all transactions originated in the distributed environment that access the DB2 environment, populated in the historical table. The Distributed menu reports are listed below:

- Summary from remote computers: list DB2 activity from remote computers.
- Remote Access Performance Summary: list remote access performance counters per computer name.

IMS menu

On the IMS menu you find information on all IMS transactions that access the DB2 environment, populated in the historical table. The IMS menu reports are listed below:

- IMS Summary: list an IMS activity summary.
- IMS PSBNAME Summary: list an IMS PSB name activity summary.
- IMS Performance Report: IMS performance counters.
- IMS PSBNAME Performance Report: IMS PSB name performance counters.

Users menu

On the Users menu, you find information on all users that access the DB2 environment, populated in the historical table. The Users menu reports are listed below:

- User usage summary: list activity by username.

IO Explorer

The IO Explorer analyzes the I/O performed by the applications on the mainframe: when a step of a job terminates, z/OS writes to the SMF, through record type 30 subtype 4 (step-end), how much I/O the executed program performed against each of its files.

DDNAME

An application program normally accesses many datasets. However, programs refer to datasets through a DDNAME (Data Definition Name), a name of up to 8 characters. The job's JCL maps each DDNAME to a real dataset. A simple example of a DXPLSMF job (the SMF batch extractor) follows:

In the example above, we have the following DDNAMEs:

- STEPLIB: load library where the program (DXPLSMF) is located.
- CICSDIC: dataset with the CICS region dictionaries. The dictionary describes the layout of the SMF 110 performance records.
- SMFIN: SMF dump file with the records to be processed by the DXPLSMF program.
- CSVOUT: output file with the records in CSV format, to be loaded into the DinoDB.
- SYSPRINT: execution report summarizing the processed records: options selected and total counters per record type written to the CSVOUT dataset.
- DXPLIN: control statements informing which records are to be collected, plus date and time intervals.

The following picture shows the number of I/O operations on each DDNAME. Note that SYSPRINT and DXPLIN do not appear on the list because they are not real datasets; they exist only in the spool of JES (the job entry subsystem), the job scheduler on the z/OS system:

Counter fields

The IO Explorer has very few counters, explained below:

- Use count: number of dataset allocations. A program may open a dataset many times in a single execution.
- Block count: number of blocks transferred to or from the dataset.
- Block size: maximum block size used.
- Connected time: total time used to transfer the data to or from the dataset.

Group fields

The group fields let you take different views of your historical data, such as:

- Which jobs access a certain device number;
- Where the datasets used by a certain job are located;
- The distribution of I/O in your tape libraries.

You can create any combination of the following fields: SID, Sysplex name, System name, RACF username, RACF group, Job name, Step name, Program name, Execution type, WLM Class, WLM Name, WLM group, WLM Report, Device number, Unit type, Device class, DD name.

The following table describes some key fields in the IO Explorer:

- Device number: device address of the unit (DASD or tape). In z/OS each device has an address from 0x0000 to 0xFFFF. You can check the device address of each volser (the label of a disk) on the system in DASD Explorer.
- Device class: type of device: 20 = DASD, 80 = tape.
- Unit type: 0F = 3390, ...
- DD name: Data Definition name, i.e. the name of a DD statement in the JCL. It connects the name in the DCB (data control block) inside the program with the definition in the JCL.
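The three transfer counters above can be combined into a rough data rate per DDNAME. A sketch, with the caveat that block size is the maximum observed, so the result is an upper bound (function and parameter names are illustrative):

```python
def approx_mb_per_sec(block_count: int, block_size: int,
                      connected_secs: float) -> float:
    """Rough upper-bound data rate for a DDNAME from IO Explorer counters.

    Assumes every block was transferred at the maximum block size, so the
    result is an upper bound rather than an exact figure.
    """
    if connected_secs <= 0:
        return 0.0
    return block_count * block_size / (1024 ** 2) / connected_secs
```

For example, 1024 blocks of 1024 bytes moved in one second of connected time is about 1 MB/s.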

The IO Explorer main window is shown below:

The IO Explorer data is derived from the SMF record described below:

- Type 30 (1E), subtype 4: Common Address Space Work - Step Termination.

Data menu

The Data menu reports are listed below:

- List details: list imported data records. Query records exactly as they were imported from the CSV file.
- Summary per partition: view an imported data summary per partition. Summarizes resource usage counters grouped by sysplex, system name and SID (LPAR).
- Total summary: view an imported data total summary. Summarizes resource usage counters for all imported records.

Common history queries

Regardless of which menu you select in IO Explorer, you always get data from the same place (target, or database table). The only difference is the grouping used in the query:

- Jobs: grouped by job name. Resources used by job name.
- LPAR: grouped by SID. Resources used by SID.
- Program: grouped by program name. Resources used by program name.
- Users: grouped by user. Resources used by user.

The following table shows the common queries:

- List details: list IO history records. Each record may represent the resources used by the executions that occur in the same interval.
- Summary per partition: summarizes the historical data grouped by SID, so you get one result line for each LPAR. In this query, whatever grouping you selected, you still get results grouped by SID.
- Total summary: summarizes all records into a single result line per interval and grouping.

MSU Explorer

MSU Explorer uses information collected from SMF record type 225, generated by the zCost product (Data menu), and from SCRT reports generated by the IBM WLC (Workload License Charges) process (Cost menu). The MSU Explorer main window is shown below:

The MSU Explorer data is derived from the SMF record described below and from the SCRT Report spreadsheet imported into the ASCxxxCost tables.

Note: for more details on SCRT, see the manual "z/OS Planning for Sub-Capacity Pricing".

- Type 225 (E1): zCost AutoSoftCapping.

Data menu

On the Data menu, you find information extracted from SMF 225 records, generated by the zCost product and imported into the CPC, LPAR and WLM tables. The Data menu reports are listed below:

- CPC imported data: list CPC imported data (asccpcdata).
- CPC total: list all CPC imported data.
- LPAR imported data: list LPAR imported data (asccpcdata).
- LPAR total: all LPAR imported data.
- WLM classes imported data: list WLM imported data (ascwlmdata).
- WLM classes total: all WLM imported data.

Cost menu

On the Cost menu, you find information that has been imported from the SCRT Report and populated into the CPC and LPAR cost tables. The Cost menu reports are listed below:

- Total Cost: list the total cost.
- Cost per Product: list cost per product.
- Cost records: list detailed cost records per product.
- LPAR Total Cost: list the total cost per LPAR.
- LPAR Cost per Product: list cost per product per LPAR.
- LPAR cost records: list detailed cost records per product per LPAR.
- CPC Used vs Charged: list CPC used and charged.

Dino Smart

The purpose of the Dino Smart product is to provide tools for managing and monitoring mainframe batch processing. Smart is a container for three important tools: AIM (Application Impact Monitoring), Job Executions and Job Chain.

With the AIM tool, you can monitor several business environments. These entities are displayed by AIM in a hierarchical tree view. The service level agreement is persisted in a historical database, and you can analyze this information using the Job Execution tool, which displays detailed information about the mainframe batch processes executed.

The Job Chain tool reverse-engineers job executions for you, so you can watch the chain in a graphical dependency diagram. Job Chain is a powerful tool that makes parallel queries against the historical database and builds the job chain dependency diagram based on the input and output datasets used by mainframe jobs.

The Dino Smart main window is shown below:

Monitoring

A mainframe supports thousands of application programs concurrently. This scenario demands effective technology to help monitor and organize the mission-critical services delivered to employees, partners and customers. To deliver services at the appropriate level and meet business requirements, availability must be considered. A well-designed, business-oriented execution view of such application programs can resolve the monitoring challenges faced by IT professionals.

At a high level, the requirements for a monitoring solution architecture can be summarized as follows: the business needs systems to provide services efficiently and reliably, and the services also need to meet organization and customer agreement levels.

The Monitoring menu reports are listed below:

- AIM: online and batch job monitoring.
- User input: create user input tasks (target: AIM User Input).
- Force task status: manually set the status for a job or job step (target: Exec Imported Data).

AIM (Application Impact Monitoring)

The first item of the Monitoring menu is AIM, an application impact monitoring tool that provides ways to watch and check the mainframe execution environment, giving the enterprise the ability to examine the status of online and batch systems in real time. The AIM solution architecture includes:

- An effective way to organize business application programs in business hierarchical views;
- Checking execution against customer service level agreements;
- Monitoring based on scheduled activities;
- Usage details for program execution and system resources.

Architecture Definition

The AIM architecture combines technology and elements from the IBM mainframe, a lightweight network protocol and the MS Windows platform into a real-time monitoring solution for mainframe application programs. An illustration of this model is provided in the following figure.

Figure 1. Simplified Solution Architecture

The Dino server establishes a connection with the IBM mainframe and starts receiving monitoring messages. The AIM desktop application runs on a client workstation and uses the server as a repository for the business hierarchical structure, from which it examines system execution status. The sections in this guide describe the AIM desktop application features, providing the knowledge necessary to use them.

AIM overview

This guide discusses AIM features and addresses the process of planning and designing a monitoring environment for the mainframe application programs in the enterprise organization. The AIM design focuses on a flexible way to organize and relate the business to its application programs, defining a few entities to achieve this goal. The most important repository entity is named Activity, mainly used as a container for programs. To an Activity, the user can add tasks, normally Jobs or Job steps, relating them as part of a business process.

Figure 2 shows another defined entity, named Task. An Activity holds a collection of Tasks, each of which can be one of the following types:

- Job: a mainframe separately executable unit of work.
- Job step: a mainframe program identified within a Job.
- User input: normally a task on the open platform, reported through the DinoTask command-line utility.
- Activity: another AIM Activity included as a Task.

Activities and Tasks have many other properties that define them and allow them to be the target of a monitoring process. Those entities are detailed later in this guide. This overview introduces the core concept of AIM: organize and monitor. Users can select and group mainframe application programs into multiple Activities and organize them within a hierarchical tree view. By navigating the tree view, users can select and monitor Activities with the Execution window.

Hierarchical Tree View

With the AIM tree, users can create and explore a hierarchical view of Activities, giving it the desired level of organization. By adding folders and placing activities into them, users can configure a business-oriented execution view.

Figure 2.1. A tree view sample

Figure 2.1 is an example showing nested Folders and Activities within a tree view.

Overview of the AIM Design Interface

Clicking the AIM menu option opens the following window:

On the left side, the Root menu is shown with all the folders inside it. Clicking a folder on the left side shows its contents on the right side. In the example above, I clicked the Root folder, and it shows the same folders on the right side. Here is another example, clicking the Demo folder:

You can also expand a folder on the left side by clicking its expand symbol. It will be shown like this:

Inside a folder there are activities, and there can be other folders with other activities inside them, as in the example above.

If you click an activity on the left side, all the tasks of that activity are shown on the right. A task can be a Job, a Job step, a User input, or even another activity with its own tasks inside.

Note: be sure to keep your tree structure of activities, folders, time windows and filters up to date, removing items no longer used in your monitoring; this makes your life easier.

Creating a Folder

Folders are containers for activities that are normally monitored together. You can also apply filters and a time window to all activities in a folder. Now, let's see how to create a folder with activities and tasks. First, right-click the Root folder and select the New > Folder option:

After this, give the folder a name. In this case, I gave it the name "Users Guide Folder".

Creating an Activity

To create a new activity inside this folder, right-click the folder and select the New > Activity option:

The Activity Detail Window will be shown:

In the activity window, the user fills in fields such as:

- Task list: all the tasks that will be part of this activity.
- Start time, end time and duration: the expected running interval, used to calculate SLAs (red fields).
- SLA specifications for the activity.
- Header name: the name that will appear on the monitor.
- Periodicity associated with the activity.
- Calendar associated with the activity.

To create the task list, the user has two options: click Find tasks (item 1 in the figure above), or fill in the fields in the Tasks item.

Find tasks

The user clicks Find tasks and the following window is shown:

In the filter area, you use several fields to compose the query, such as:

- Time period (start and end time);
- Task type queried (job, jobstep, activity, userinput or syslog);
- For job, jobstep or activity: in the jobname field, enter the name of the searched task;
- For userinput: in the task name field, enter the name of the userinput task to be searched;
- For syslog: in the message field, enter the searched string between percent characters;
- The user can also use the user filter field to create a complex filter.

After filling in your options, just click the Query button to execute the query; Dino Smart will bring back all the information for that task. For task types job or jobstep, the query returns details from the history, where you get the average values for the task, such as: start time, executions, EXCP (I/O), total service units, CPU time and duration. Then right-click (or double-click) on the grid and select Add to activity tasks; if you do not want to select another task, close this window by clicking its close symbol. Once done, the task information appears in the Activity tasks field of the Activity Detail Window.

Task type options:

- Type Job
- Type JobStep
- Type Activity
- Type UserInput
- Type Syslog with User Filter (the filter must be associated with the task after it has been added)

If you want to delete a task from an activity, select the task and press the Delete key, or right-click and choose Delete task. After defining all the tasks that make up the activity, the user can change the order of the list using the up and down buttons. The user can also define each task's relevance within the activity.

Task Relevance

A task's relevance determines how the task status can affect the status of its owner activity during execution monitoring. By default, all tasks are set as Required when added.

Critical - a task with critical relevance can set its owner activity to Fatal status if it is either not running or has a completion code different from normal;
Required - a task with required relevance can set its owner activity to Error status if it is either not running or has a completion code different from normal;
Optional - a task with optional relevance does not change its owner activity status.

Other task modifications

The user can adjust the average values used for the task. To do this, right-click on the task and choose Open task detail. In this window the user can change any information already registered for the task. The average values obtained by the query are in the Schedule time and Task specifics frames.

Associate filters with tasks

Select Organize to add or remove filters associated with the task:

Add filter - select the filter in the Available filters list, click the add button, and then click the Ok button.
Remove filter - select the filter in the Selected filters list, click the remove button, and then click the Ok button.

Note: For each task that uses a filter in the query, the user needs to associate the filter with the task.

Activity Detail Window

In the Activity Detail Window, if you click on the symbol, it will automatically fill in the fields with the average Start time and End time and give you the duration. Remember that the user can also set the Start and End time manually, as appropriate.

Note: Start and End time are used to track the SLA associated with the activity; because of this, it is recommended that you set the Start and End time values according to the business need.

Periodicity

Now let's select the periodicity of the activity. By default, when you open the Activity Detail Window, the periodicity is set to Daily, as you can see at the top right. The activity can be set as Eventual, Online, Daily, Weekly, Monthly, or Annual. For each chosen periodicity, the window will have some variations in the start and end time fields. Some examples are shown below:

Eventual Periodicity - in the Start and End time fields (SLA fields), indicate the start and end times of the activity.
Daily Periodicity - in the Start and End time fields (SLA fields), indicate the start and end times of the activity.
Weekly Periodicity - in the Start and End time fields (SLA fields), indicate the day of the week and the start and end times of the activity.
Monthly Periodicity - in the Start and End time fields (SLA fields), indicate the day and the start and end times of the activity.
Annual Periodicity - in the Start and End time fields (SLA fields), indicate the day, month, and start and end times of the activity.

Calendar filter

You can use a Calendar filter to specify the days on which your activity is supposed to run - for example, a daily activity that does not run on Sundays. The Calendar filter is only used in such specific cases. By default, when you open the Activity Detail Window, the Calendar filter is set to <Not Selected>. To create a Calendar filter, select <New Calendar> and then click on the Edit button. The Calendar Filter Editor will be shown. In this window, you create filters as SQL queries, but through a friendly interface: as you can see, it has many buttons that you can use to build your filter.

In our example, the activity does not run on Sundays, so all you need to do is click two buttons: first the NOT button and then the Sun button. Note that the Filter items you selected and the resulting filter text are shown in the blank spaces below the buttons. Once done, click Save and give the Calendar filter a name; we named ours Not Sunday. Click OK and close the Calendar Filter Editor window. The Calendar filter we just created will already be selected on the Activity Details Window:
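Conceptually, a calendar filter such as Not Sunday is just a predicate over the run date. The sketch below is a hypothetical illustration of that idea, not the product's actual implementation:

```python
from datetime import date

def not_sunday(run_date: date) -> bool:
    # Mirrors the "NOT Sun" calendar filter: allow any day except Sunday.
    # Python's weekday(): Monday == 0 ... Sunday == 6.
    return run_date.weekday() != 6

# A daily activity is evaluated only on days the filter accepts.
print(not_sunday(date(2014, 6, 1)))  # 1 Jun 2014 was a Sunday -> False
print(not_sunday(date(2014, 6, 2)))  # Monday -> True
```

Any combination of the editor's buttons (days, NOT, AND/OR) reduces to a predicate of this kind applied to each scheduled day.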

Activity Service Level Agreement (SLA)

You can configure SLA alarms for your activity, based on the interval time specified on the activity, by clicking on the link button. The following window will be shown. Suppose your activity duration is set to 1 hour and you want to be alarmed if the activity is expected to take longer than that. For example, while the activity is up to 20% late, it alarms in Green. From 20% to 70% late, it alarms in Yellow. From 70% to 90%, it alarms Red. Past 90%, the alarm turns Black, which means Fatal. You can calibrate the percentage bars to your needs, and you can also use fewer statuses, such as only On-time and Late. You can also be alarmed if your activity does not start at the expected time: at the top right, you can set the Not Started alarm. In that case, if the activity does not start up to 30 minutes after the scheduled time, it will also alarm.
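The percentage bands described above can be sketched as a simple mapping from lateness (as a percentage of the expected duration) to an alarm color. This is an illustration of the thresholds used in the example, which in the product are user-calibrated:

```python
def sla_alarm_color(elapsed_minutes: float, expected_minutes: float) -> str:
    """Map how late an activity is, as a percentage of its expected
    duration, to an alarm color using the example thresholds from the
    text: Green up to 20% late, Yellow up to 70%, Red up to 90%,
    Black (Fatal) beyond that."""
    if elapsed_minutes <= expected_minutes:
        return "On-time"
    late_pct = 100.0 * (elapsed_minutes - expected_minutes) / expected_minutes
    if late_pct <= 20:
        return "Green"
    if late_pct <= 70:
        return "Yellow"
    if late_pct <= 90:
        return "Red"
    return "Black"  # Fatal

print(sla_alarm_color(66, 60))   # 10% late -> Green
print(sla_alarm_color(100, 60))  # ~67% late -> Yellow
print(sla_alarm_color(120, 60))  # 100% late -> Black
```

Reducing the setup to only On-time and Late is the same idea with a single threshold instead of four.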

Name and Header Name

An activity has two names:
Name - the physical name of the activity. If you use an activity as a task of another activity, you must use this name;
Header name - the name that will appear on the monitoring window.

The user types the activity's header name; in this example we enter User Guide Activity.

Note: Document your activities (1) and your tasks (2). You can use the description fields to document your activities and their corresponding tasks - for example, operator instructions in case of failures (abends).

Now just click the Save button; in this case the name and header name are the same. If the user wants the name and header name to be different, click the Save as button and type the name. Afterwards, close the Activity Detail window, and you will see the folder with the activity in the AIM window:

Parameter and Parameter Template

Like filters, parameters are entities that can be targeted to folders and tasks. You should apply parameters when an activity needs to be reused. The Task name property is a key for execution queries; with parameters, you can insert symbols into a task name that are substituted with values at query time. The steps for using parameters are:

Identify a candidate activity - check whether an activity has tasks that share name patterns that fit a symbol/value substitution.
Modify the task name - a symbol can have any name and must be enclosed in percent signs (%). Insert this construction into the task name at the desired position for substitution.
Create a parameter - the symbol/value pair defines the parameter entity, which is applied at folder or task scope.

With a Parameter template you can define a collection of symbols for reuse within the parameter associations of each folder you need.

Note: Only one parameter can be associated with each folder activity.

Time window

The time window establishes the time interval within which the activity must be performed, based on the periodicity defined for the activity. You can associate a time window with each activity, but we suggest creating only the number of time windows appropriate to your needs. A time window named <Default>, containing all the periodicities, is provided and is shown in the figure below.
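The symbol/value substitution described above can be sketched as follows. The symbol and task names here are invented for illustration, not taken from the product:

```python
def apply_parameters(task_name: str, parameters: dict) -> str:
    # Replace each %SYMBOL% occurrence in the task name with its value,
    # as happens at query time when a parameter is applied at folder
    # or task scope.
    for symbol, value in parameters.items():
        task_name = task_name.replace(f"%{symbol}%", value)
    return task_name

# Hypothetical example: two folders reuse the same activity definition,
# each with its own parameter value for the %APP% symbol.
print(apply_parameters("%APP%PAY01", {"APP": "VIC"}))  # -> VICPAY01
print(apply_parameters("%APP%PAY01", {"APP": "YMM"}))  # -> YMMPAY01
```

This is why reuse works: one activity definition, one symbol in the task names, and a different parameter value per folder.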

Notes:
Only one time window can be associated with each folder activity.
If no time window is directly associated with the activity, the <Default> time window will be used.
A sub-folder inherits the time window from its parent folder unless it has its own time window.
The default value of the Time zone field is the one used by Windows; the user can set another time zone.

Creating or changing a time window

The user has two options:
Options / Time Window Manager - double-click the <Default> time window or another time window;
In the activity folder, select New / Time Window.

The time window defines the batch window periods used to monitor your activities; for example, your daily window may start at 5 PM (17h) and end the next day at 9 AM. A folder is composed of many activities that are supposed to run within the same time period, so the time window is used to set this period. To do this, right-click on the folder and select New > Time Window. The following window will be shown:

Periodicity activities

With the time window, you control the periodicities you have configured on the activities. Note that it has six tabs: Eventual, Online, Daily, Weekly, Monthly, and Annual.

Eventual activities

On the Eventual tab, you set the period between executions of the activities. This configuration only applies to activities in the folder that were configured with Eventual periodicity. Assume we have an activity that runs every two and a half hours; we set a period of two hours between executions, as in the image below. This means that every two hours the cache is cleared to wait for the next execution of this activity.

Online activities

On this tab, we set the response time for online activities. For example, if records arrive every 15 minutes and you set 20 minutes on the Online tab, then if there are no records for the activity within 20 minutes, the activity is considered not working. Again, this configuration only applies to activities configured with Online periodicity.
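The online check described above amounts to comparing the age of the newest record against the configured response time. A minimal sketch of that rule (an illustration; field names and times are invented):

```python
from datetime import datetime, timedelta

def online_activity_working(last_record_time: datetime,
                            now: datetime,
                            response_limit: timedelta) -> bool:
    # An online activity is considered working while records keep
    # arriving within the configured response time (e.g. 20 minutes
    # for records that normally arrive every 15 minutes).
    return now - last_record_time <= response_limit

now = datetime(2014, 5, 29, 12, 0)
limit = timedelta(minutes=20)
print(online_activity_working(datetime(2014, 5, 29, 11, 45), now, limit))  # True
print(online_activity_working(datetime(2014, 5, 29, 11, 30), now, limit))  # False
```

Setting the limit slightly above the normal arrival interval (20 minutes for 15-minute records) avoids false alarms from ordinary jitter.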

Daily activities

The Daily tab is the one we use for our example. Every client has a specific hour for the beginning of its activities. Let's assume that our activities folder holds all activities that need to execute between 10:00 PM and 5:00 PM of the next day. On the Daily tab, we have a time line divided into DAY / DAY + 1. To configure the time line, move the bars: the first one to 10:00 PM (22h) on Day, and the second to 5:00 PM (17h) on Day + 1. The time line will then look like this:

Note: You can go back in time using the time window by deselecting the Now checkbox and setting the day and hour you wish to see in the Reference date field.

Weekly activities

Now let's see the three last tabs. The next one is the Weekly tab, where we set the week days for weekly activities. If your weekly activities begin every Wednesday and finish on Monday of the next week, for example, you can set the bars like this:

Monthly and Annual activities

The Monthly and Annual tabs follow the same concept.

Finalizing the Time Window

Let's go back to our example (a daily activity) and continue our activity monitoring. After setting the right time line for the activity, name the time window in the Header field and click Save. We will give it the name User's Guide Time Window. After saving, you can close this window.

You will see that in the folder there is now a time window next to the activity we created. Before we test the activity, we must make a small change on the Activity Details Window, because we now have a time window that starts on one day and finishes on the next. Since the activity starts between 3 PM and 4 PM and the time window begins at 10 PM, we must tell the system that the activity is to be monitored on Day + 1. To do this, we write 01. before the start and end times. If we do not, the activity will alarm as Delayed when the time window starts at 10 PM and will stay that way until the activity starts the next day. After doing this, save and close the window. Now we can test the activity.

Organize option

The Organize option allows the user to change the entities that are part of a folder:

Activities - add or remove activities for the current folder.
Time window - select one available time window for the current folder.
Filters - add or remove filters for the current folder.

Activities

When you select the Activities option in a folder, a screen is displayed with all activities associated with the folder on the right side and the other activities on the left side. You can add activities to or remove activities from the folder by selecting one or more activities, using the add/remove buttons, and then clicking the Ok button.

Time window

When you select the Time window option in a folder, a screen with the time window currently associated with the folder is displayed. To change it, select another time window from the list and click the Ok button.

Note: Remember that if no time window is associated with a folder, the <Default> time window will be used.

Filters

When you select the Filters option in a folder, a screen is displayed with all filters associated with the folder on the right side and the other filters on the left side. You can add filters to or remove filters from the folder by selecting one or more filters, using the add/remove buttons, and then clicking the Ok button.

AIM Options

The AIM has some options that we will describe in the next pages. To use them, click Options and choose the desired option.

Set colors

You can change the colors to any other color you wish. In the AIM - Application Monitoring window, click Options > Set Colors. You change a color by writing its name in the field; for example, we will erase the name Blue and write Orange in the Progress column:

The color changes according to the name you write.

Filters

The description of each filter type is the same for all products. For a description of each type, see Common Features / Filters.

Parameter Template

With a parameter template, the user can define the list of symbols that will be used later in the tasks associated with each activity. Click Options > Parameter template (New), enter the symbol list, and click the Save as option.

Type the parameter template name and click the Ok button. To delete a template, select it in the template list and click the Delete option.

Time window manager

This option displays the list of all time windows defined in AIM. In addition, the user can:
Create a new time window (previously described);
Check in which folders the time window is used; to remove an association with a folder, use the Organize / Time window option;
Delete a time window, if it is not associated with any folder;
Edit an existing time window (double-click; previously described).

Activity manager

This option displays the list of activities defined in AIM. In addition, the user can:
Create a new activity (previously described);
Check in which folders the activity is used; to remove an association with a folder, use the Organize / Activities option;
Delete an activity, if it is not associated with any folder;
Edit an existing activity (double-click; previously described).

Monitoring Activity

You can monitor a single activity or all activities in a folder by pressing F5 or selecting Run over an activity or a folder. When monitoring an activity, you can view either all the details or a summary of the activity and its tasks by clicking on the activity or task and selecting Switch view.

Detail information
Summary information

Now that you have learned how to create an activity, we will open another one to show some examples of configurations you can make.

Monitoring Sample (Batch Daily Systems)

After setting up your monitoring process as described in the previous pages, we show below the operation of the monitoring process and the information provided in each of its windows. To start monitoring a folder or an activity, simply press the F5 key or right-click and choose Run. If you choose to monitor a folder, the list of activities registered in that folder will be displayed.

Summary and Detailed Views

You can toggle the view mode between Summary and Detailed for the activity and task windows by right-clicking and choosing Switch view. Both views are shown below:

Detailed view
Summary view

Shortcuts

You can save a customized window as a shortcut and get the same results by clicking the shortcut. The shortcut saves all your selections: window location and size, applied filters, and refresh options. Right-click on the blank area in the window, click Create shortcut on desktop, and give it a name. Shortcuts can be created for activities or tasks.

Time Window and Refresh mode

The time window and refresh mode options are interconnected. Do not forget that there is a single time window per folder; if you want to change any field other than the reference date, you should review all activities within the folder before starting any monitoring process.

Now option / Date/Time option

When the user is monitoring in real time, the Reference date field in the time window must be set to Now and the refresh option must be Automatic (be sure to set a suitable refresh interval). If you click the link button, options are shown as below: the user can choose manual or automatic update mode; in automatic mode, the user sets the update interval. In the example, after 1 minute the windows will be refreshed, i.e. the activities will be re-evaluated.

You can speed up the monitoring window using the Cache completed tasks facility. When this option is selected, all job and job-step tasks that completed successfully are cached, and their completion status is not re-checked on the database. If you submit another job with the same name as a previous one in the same time window, you will see the details of the previous execution unless you disable the cache and update the monitoring window.

Run

The Run option button allows the user to re-evaluate the activities being monitored at any time.

Monitor Folder

If you chose to monitor a folder, the list of activities registered in that folder will be displayed; if you chose an activity, the list of tasks for that activity will be displayed. We now describe the fields displayed in the folder window:

Name - header name defined during activity registration.
Start time - start time of the activity if it has already started, otherwise the time window.
End time - end time of the activity if it has already finished, otherwise the estimated end of the activity (see Expected End Time and Duration below).
Elapsed time - total time spent by the activity, or the time spent so far.
The Progress, SLA, and Schedule fields provide the activity status for each indicator.

Expected End Time and Duration

The activity's expected end time is the main metric being monitored. Based on it, we can alert you when something is going to be late, which gives time to prepare a solution to the problem or to determine the impact of the delay. The calculation of the expected end time is straightforward:

Activity end-time = MAX(task end-times)
Task end-time = task start-time + duration

So the software uses the actual start time and the expected duration of the tasks to calculate the expected end time. Fortunately, in many mainframe shops the jobs are very regular, and the job scheduler is programmed to repeat the same chain of jobs every day. We can also use the Dino database's historical information on the jobs, as shown below. If we calculate the averages, we get the estimated values: in this example, the task starts at 20:22 and takes 5:21 to execute. This works well in usual circumstances; however, if needed, you can rely on other tasks to calculate the expected start and end times. Say we have a job that is triggered by the completion of another job; we can use the computation below to determine that job's end time:

Task end-time = depend-task end-time + duration

We first calculate the initial job's expected end time, and then we can calculate the expected end time of the triggered job.
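The three rules above can be sketched in a few lines. This is a hypothetical illustration: the dependent task's duration is invented, and the text's "5:21" is read here as hours:minutes:

```python
from datetime import datetime, timedelta

def task_end(start: datetime, duration: timedelta) -> datetime:
    # Task end-time = task start-time + duration
    return start + duration

# Independent task, using the example from the text:
# it starts at 20:22 and takes 5:21 on average.
t1_end = task_end(datetime(2014, 5, 29, 20, 22),
                  timedelta(hours=5, minutes=21))

# Dependent (triggered) task: its start is the end of the task it
# depends on.  Task end-time = depend-task end-time + duration
t2_end = task_end(t1_end, timedelta(hours=1, minutes=30))

# Activity end-time = MAX(task end-times)
activity_end = max(t1_end, t2_end)
print(activity_end)  # 2014-05-30 03:13:00
```

Note how the chain is evaluated in order: the initial job's expected end feeds the triggered job's expected start, and the activity's expected end is simply the latest of the task ends.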

Legend Status - Progress, SLA and Schedule

The legend for the status of these fields can be seen in the frame below. We will show some situations that may occur in your monitoring and what is displayed in the status of each field. To facilitate understanding, we show the defined values of the other indicators (start and end of activity and SLA) that influence the status of these fields. The SLA values used in the examples are:

Bank Charges - delay at start of activity: 10 minutes; end of activity: 0-20 Normal / Delayed / Alarm / >91 minutes Fatal
Bank Checking - delay at start of activity: 20 minutes; end of activity: 0-20 Normal / Delayed / Alarm / >91 minutes Fatal
Credit Card - delay at start of activity: 30 minutes; end of activity: 0-20 Normal / Delayed / Alarm / >91 minutes Fatal
Insurance and Loans - delay at start of activity: 30 minutes; end of activity: 0-20 Normal / Delayed / Alarm / >91 minutes Fatal

As you can see, the SLA parameters are customized to the user's needs for each activity. The values used in the examples are fixed; only the start and end times of the activities change.

Activity start/end times used in the examples:

Bank Charges - tasks start/end time: 18:00 - 22:00
Bank Checking account - tasks start/end time: 23:50 - 05:00
Credit Card - tasks start/end time: 23:00 - 08:00
Insurance - tasks start/end time: 19:00 - 04:00
Loans - tasks start/end time: 23:00 - 07:00

Example 1: Activity start/end: Bank Charges 18:00/22:00; Bank Checking 23:50/05:00. Date/time reference: 29/May/14 19:30.

Bank Charges status:
Start time - start time of the first task, VICO, 18:02:39;
End time - the estimated end time of the task VIC0931;
Progress - Running;
Schedule - On time, 00:02:39 (normal, because the delay of 02:39 is less than 10 minutes).

Bank Checking status:
Start time - 19:30:00 (date/time reference);
End time - time reference + duration of the task YMMD0080;
Progress - Not started;
Schedule - On time, -04:20:00 (normal, because the activity is expected to start at 23:50).

Example 2: Activity start/end: Bank Charges 18:00/22:00; Bank Checking 19:00/05:00. The abending task BIBC1704 was added to the activity Bank Charges (relevance Optional or Required). Date/time reference: 29/May/14 19:30.

Bank Charges status:
Start time - start time of the first task, VICO, 18:02:39;
End time - the estimated end time of the task VIC0931;
Progress - Error (task BIBC1704 terminated with status Fatal due to the abend);
Schedule - On time, 00:02:39 (normal, because the delay of 02:39 is less than 10 minutes).

Bank Checking status:
Start time - 19:30:00 (date/time reference);
End time - time reference + duration of the task YMMD0080;
Progress - Not started;
Schedule - Delayed (because the activity was expected to start at 19:00; after the activity starts, the delay information is kept in the Schedule field, but the delay event ends and the color changes to gray).

Example 3: Activity start/end: Bank Charges 18:00/22:00; Bank Checking 19:00/05:00; Credit Card 19:00/04:00. The abending task BIBC1704 was added to the activity Bank Charges (relevance Critical). Date/time reference: 29/May/14 19:30.

The difference between examples 2 and 3 is that the relevance of the task BIBC1704 was changed from Required to Critical, so the progress of the activity went from Error to Fatal.

Example 4: Activity start/end: Bank Charges 18:00/22:00; Bank Checking 23:50/05:00; Credit Card 23:00/04:00; Insurance 19:00/04:00; Loans 23:00/07:00. Date/time reference: 03/Jun/14 07:00.

The activities Bank Charges, Bank Checking, and Insurance were completed successfully and within the specified SLA.

Credit Card:
Start time - start time of the first task, YML, 00:02:27;
End time - the estimated end time of the task YMLO0963;
Progress - Running;
SLA - Late, 25:01 (the estimated end time is greater than the time defined for the activity, but within the delay tolerance for the end of the activity);
Schedule - Delayed to start by 01:02:27.

Loans:
Start time - start time of the first task, YMACA001, 03:17:40;
End time - the estimated end time of the task YMAC0005;
Progress - Running;
SLA - On time, 03:02 (the estimated end time is greater than the time defined for the activity, but within the tolerance for the end of the activity);
Schedule - Delayed to start by 04:17:40.

Example 5: Activity start/end: Bank Charges 18:00/22:00; Bank Checking 23:50/05:00; Credit Card 23:00/04:00; Insurance 19:00/04:00; Loans 23:00/07:00. Date/time reference: 03/Jun/14 08:40.

The activities Bank Charges, Bank Checking, Insurance, and Loans were completed successfully and within the specified SLA.

Credit Card:
Start time - start time of the first task, YML, 00:02:27;
End time - the estimated end time of the task YMLO0963;
Progress - Running;
SLA - Alarm, 01:21:51 (the estimated end time is greater than the time defined for the activity, but within the alarm tolerance for the end of the activity);
Schedule - Delayed to start by 01:02:27.

Example 6: Activity start/end: Bank Charges 18:00/22:00; Bank Checking 23:50/05:00; Credit Card 23:00/04:00; Insurance 19:00/04:00; Loans 23:00/07:00. Date/time reference: 03/Jun/14 09:35.

The activities Bank Charges, Bank Checking, Insurance, and Loans were completed successfully and within the specified SLA.

Credit Card:
Start time - start time of the first task, YML, 00:02:27;
End time - the estimated end time of the task YMLO0963;
Progress - Running;
SLA - Fatal, 01:38:10 (the activity exceeded the estimated time and the SLA threshold, indicating Fatal status for the activity);
Schedule - Delayed to start by 01:02:27.

Monitoring Filter

As you saw in the examples above, the number of activities and tasks being monitored can be large. To simplify which events are displayed in the monitoring process, you can use the Filters option; with it, the user chooses which events will be displayed. The figure below shows the list of all events. You can apply filters if you have too many activities to monitor; use the scroll bars, and sort by column by clicking on a column header. In this example, we filter activities with status Error, Late, Alarm, and Delayed and click the Ok button; activities with any other status will not be shown.

Notes:
Error - this status occurs when a task ends in an error or fatal condition.
Alarm - this status occurs when the SLA time of the activity enters the alarm or fatal condition.
An activity is displayed only if the status occurs in the Progress, SLA, or Schedule field.
Filters can be applied to activities and tasks.

Monitor an Activity's Tasks

You can see the tasks of an activity as follows:
Double-click on the activity (drill-down) (1);
Open the tasks in a new window;
Run the activity (F5).

Note (1): If you use the drill-down option, the active filters in the window are kept in the task window.

As with activities, you can see this information in detailed or summary form; toggle the view mode between Summary and Detailed in the task window by right-clicking and choosing Switch view. Both views are shown below:

Summary view
Detailed view

We now describe the fields displayed in the task window:
Name - header name defined during activity registration.
Start time - start time of the task if it has already started, otherwise the time window.
End time - end time of the task if it has already finished, otherwise the estimated end (see Expected End Time and Duration above).
Elapsed time - total time spent by the task, or the time spent so far.
Progress - this field provides the status indicator for each task.

Note: The SLA and Schedule fields are valid only for activities.

You can get details about any task of any activity; just click on the Open execution detail option. For better understanding, we split the window into colored frames:

Yellow frame - contains information about the task as registered in the activity. This information is static; it only changes if a field of the task is changed manually, or if the task is removed and re-added to the activity.
Green frame - contains the same task information displayed in the detailed task window.
Blue frame - contains the resource consumption of the task so far (if running) or the total resources consumed (if completed, with or without exception).
Orange frame - contains information about tasks of type Job or JobStep. This information is displayed if the task is running or completed (with or without exception). Inside the frame there are links to job number and step information; just click them to get that information.

Job number information - this information is only displayed after the job has ended.

Step information - displays the information available so far on the steps.

User Input

You can monitor activities that do not run on the mainframe environment, such as activities that you have on open platforms. To make this possible, the activity first needs to be registered. There are two ways to create such an activity: the User Input interface or the Dinotask interface.

User Input interface

The screenshot below shows the User Input tool's interface. You create events through this interface by filling in the event's information in the fields: Task name, Start time, End time, Status, User name, Source, and Description. The events are recorded in the database by a script.

Dinotask Interface

Syntax

Creating a task

After creating the event, simply include it as a new activity within a folder, or include it as a new task in an activity already registered, setting the task information: Header, Name, Relevance, and Duration.

Note: If you are including the task in a new activity, you must also fill in the red fields for the SLA.

Force task status

You may need to set the status of a certain task manually, to handle exceptional conditions such as a job that is supposed to run but will not run this time, or an abended job that was replaced by another action and that you do not want the monitor to show as a failure. You can use the Force task status tool to create an artificial execution status: give the task's information (SID, Job name, Comp. code) and, if you want to force a step status, check the Step name checkbox and write the step name in the field below. After this, click the button.

Executions

The reports in the Executions menu are listed below:

Job executions - list of job executions
Step executions - list of step executions
Summary per partition - view job execution summaries per partition
Summary per sysplex - view job execution summaries per sysplex
Total summary - view the job executions total summary
User input events - list of user input events

Job executions

The Job executions tool is used to view detailed information about jobs and steps submitted for execution in the mainframe environment. It shows records loaded into the historical database and can be used to validate service level agreements. The records are presented inside a visual grid panel and can be organized as needed for better viewing. Perform queries that return records about job executions; values such as SID, Job number, Job name, Start time, End time, Class, Completion code, ABEND, Program name, EXCPs, Service Units, and CPU time can be analyzed with this tool. See below a screenshot of the Job Executions tool. The output from this query is a list of job execution records. Use the Filter tab to apply filtering elements to a query.

If you double-click any field on a line, you will see the information for all steps executed by the job. In job executions, you can also query the messages written to Syslog by a job name or a job number.

Query result in Syslog

Note: for more details about system messages, see the Syslog description later in this manual.

Step executions

Performs queries that return records about the step executions of jobs. Values such as SID, Job number, Job name, Step number, Start time, End time, Step name, Program name, EXCPs, Service Units and CPU time can be analyzed with this tool. A screenshot of the Step Executions tool is shown below.

The output of this query is a set of step execution records. Use the Filter tab to apply filtering elements to the query.

Summary per partition

Performs queries that return records about executions for jobs, step terminations or intervals (depending on the scope selected). Information is grouped by Sysplex, System name, SID and End time. The output of this query is a set of execution records. Use the Filter tab to apply filtering elements to the query.

Besides the grouping fields, other fields such as Start time, End time, Executions, EXCPs, Service Units and CPU time are also displayed. You can use the drill-down option (double-click a line in the answer area) to see the detail per job, and repeat the drill-down in the new window to view the information per step. An image of the query result is shown below.

Drill-Down
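The grouping behind this summary can be sketched in Python. This is only an illustration of the aggregation the query performs; the record layout and field names here are hypothetical, while the real tool reads the Dino database:

```python
from collections import defaultdict

# Hypothetical execution records; the fields mirror the columns described above.
records = [
    {"Sysplex": "PLEX1", "SID": "SYSA", "EXCPs": 1200, "CPUtime": 3.5},
    {"Sysplex": "PLEX1", "SID": "SYSA", "EXCPs": 800,  "CPUtime": 1.5},
    {"Sysplex": "PLEX1", "SID": "SYSB", "EXCPs": 500,  "CPUtime": 2.0},
]

def summary_per_partition(rows):
    """Group rows by (Sysplex, SID), counting executions and summing metrics."""
    out = defaultdict(lambda: {"Executions": 0, "EXCPs": 0, "CPUtime": 0.0})
    for r in rows:
        key = (r["Sysplex"], r["SID"])
        out[key]["Executions"] += 1
        out[key]["EXCPs"] += r["EXCPs"]
        out[key]["CPUtime"] += r["CPUtime"]
    return dict(out)

summary = summary_per_partition(records)
```

Drilling down on a line corresponds to re-running the same aggregation with a finer grouping key (per job, then per step).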

Summary per sysplex

Performs queries that return records about executions for jobs, step terminations or intervals (depending on the scope selected). Information is grouped by Sysplex and End time. The output of this query is a set of execution records. Use the Filter tab to apply filtering elements to the query.

Besides the grouping fields, other fields such as Start time, End time, Executions, EXCPs, Service Units and CPU time are also displayed. You can use the drill-down option (double-click a line in the answer area) to see the detail per partition, and repeat the drill-down in the new window to view the information per job and then per step. An image of the query result is shown below.

Drill-Down

Total summary

Performs queries that return records about executions for jobs, step terminations or intervals (depending on the scope selected). Information is grouped by End time. The output of this query is a set of execution records. Use the Filter tab to apply filtering elements to the query.

Besides the grouping fields, other fields such as Start time, End time, Executions, EXCPs, Service Units and CPU time are also displayed. You can use the drill-down option (double-click a line in the answer area) to see the detail per sysplex, and repeat the drill-down in the new window to view the information per partition, then per job, and then per step. An image of the query result is shown below.

Drill-Down

User input events

Performs queries that return records about user input events. Values such as Task name, Start time, End time, Description, User name, Task status and Source can be analyzed with this tool. A screenshot of the User input events tool is shown below.

The output of this query is a set of user input records. Use the Filter tab to apply filtering elements to the query.

Job chain

The job chain menu reports are listed below:

From jobname - Builds a job dependency diagram given a job name
From dsname - Builds a job dependency diagram given a dsname
From VSAM dsname - Builds a job dependency diagram given a VSAM dsname

Job chain overview

The Job Chain tool reverse-engineers job executions and creates a dependency diagram starting from an initial job name informed by the user, without requiring integration with other mainframe products or schedulers, and without importing external data. Job Chain performs this task by looking at the datasets written and read by jobs. The engine reads the historical database, using parallel queries to increase performance, and builds the job chain by interpreting the results to determine which job depends on which. The final view is a chain of linked jobs, separated by the datasets they use in input and output operations.

From jobname

Builds a job dependency diagram given a job name. You can use the Filter tab to apply filtering elements to the query. A screenshot of the Job Chain by jobname tool is shown below.
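The core inference described in the overview, that a job which reads a dataset depends on the job that wrote it, can be sketched as follows. This is a simplified illustration with hypothetical records (it keeps only the last writer of each dataset), not the product's actual engine:

```python
# Hypothetical dataset-access records: which job wrote or read which dataset.
accesses = [
    {"job": "JOBA", "dsname": "PROD.DAILY.EXTRACT", "mode": "write"},
    {"job": "JOBB", "dsname": "PROD.DAILY.EXTRACT", "mode": "read"},
    {"job": "JOBB", "dsname": "PROD.DAILY.REPORT",  "mode": "write"},
    {"job": "JOBC", "dsname": "PROD.DAILY.REPORT",  "mode": "read"},
]

def job_chain(records):
    """A reader of a dataset depends on the job that wrote that dataset."""
    writers = {}
    for r in records:
        if r["mode"] == "write":
            writers[r["dsname"]] = r["job"]  # simplified: last writer wins
    deps = []  # (upstream job, linking dataset, downstream job)
    for r in records:
        if r["mode"] == "read" and r["dsname"] in writers:
            deps.append((writers[r["dsname"]], r["dsname"], r["job"]))
    return deps

chain = job_chain(accesses)
```

Here the chain links JOBA to JOBB through PROD.DAILY.EXTRACT, and JOBB to JOBC through PROD.DAILY.REPORT, matching the "jobs separated by their datasets" view the tool draws.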

From dsname

Builds a job dependency diagram given a DSname. You can use the Filter tab to apply filtering elements to the query. A screenshot of the Job Chain by DSname tool is shown below.

From VSAM dsname

Builds a job dependency diagram given a VSAM DSname. You can use the Filter tab to apply filtering elements to the query. A screenshot of the Job Chain by VSAM DSname tool is shown below.

Syslog

The syslog menu reports are listed below:

List details - List raw data from syslog
Job SYSLOG - List SYSLOG records by job name
List SYSLOG by JobNumber - List SYSLOG records of a certain job
SYSLOG LPAR Summary - Summary of SYSLOG messages per LPAR

Syslog contains the routing and descriptor codes that IBM assigns to the messages issued by z/OS components, subsystems and products. Routing and descriptor codes are specified by the ROUTCDE and DESC keyword parameters on the WTO and WTOR macros, which are the primary methods programs use to issue messages. The routing code identifies where a message will be displayed. The descriptor code identifies the significance of the message and, on color operator consoles, the color of the message. An example of a syslog screen on the mainframe is shown below.

List details

When you select the List details option you will see the screen below, where you can use the filter fields to perform your search on Syslog information.

Note: to search for a piece of text in a message, place the text to be searched between percent (%) characters. You can also use a single percent sign to match only a prefix or a suffix of the message text.

Query result examples
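The note above can be made concrete with a small helper that builds the search pattern. The helper itself is hypothetical (the product just accepts the pattern text in the filter field); it only shows where the percent signs go:

```python
def syslog_like_pattern(text, prefix_only=False, suffix_only=False):
    """Wrap the search text in '%' wildcards as the List details filter expects."""
    if prefix_only:
        return text + "%"     # matches messages starting with the text
    if suffix_only:
        return "%" + text     # matches messages ending with the text
    return "%" + text + "%"   # matches the text anywhere in the message

pattern = syslog_like_pattern("IEF404I")
```

So searching for the message id IEF404I anywhere in a message means typing %IEF404I% in the filter field.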

Job SYSLOG

This query allows you to search system messages for a specific job name; the percent character can be used as a wildcard in the job name.

List SYSLOG by JobNumber

This query allows you to search system messages for a specific job number.

SYSLOG LPAR Summary

This query allows you to search system messages for a SID. The query returns the number of messages written on that SID.

Common Features

This section presents the common features used by all Dino Explorer products. These features allow the products to fulfill their principal purpose: create and submit queries to the Dino database and show the resulting data in tabular form. The list below presents these common features:

Query interface - Dino Explorer window to input query parameters, submit queries, display the results and work with them
Reports menu item - Allows users to save query data entry as a report, to be reused later
Options menu item - Contains advanced tools like filter and grouping editors, database connection settings and profile definition
Help menu item - Shows product documentation, license information and product details
Exit menu item - Closes all open product child windows and exits

Query interface

The query interface is a user-friendly window used to query Dino database data. With the query interface, users can enter input information to instruct the query engine to return the desired data. This input may be a simple range of dates or a sophisticated filter with complex logic. The query interface also lets users manipulate the result data: sorting columns, rearranging column order, choosing which columns are visible, exporting the whole data set to MS Excel or to a file, and so on. The query interface can be seen in the picture below.

The query interface is composed of:

1. Filter controls;
2. Results grid;
3. Status bar.

Next, each component of the query interface will be explained in depth.

Filter controls

Filter controls are the query interface component that allows users to enter information to select data from the Dino database according to their needs. The filter control can be hidden or shown by clicking the button located at the top-left corner of the control. The following information can be entered to filter data in queries:

Type a Start/End date and time to restrict result data to the desired period;
Select a filter to limit the query result data returned;
Inform a grouping to arrange the resulting data in groups;
Input a parameter value related to a query type, represented here by "Job name";
Choose the periodicity to group rows in hours, days, weeks...;
Set the scope of SMF record type 30 to "On the period", "Finished at" or "Intervals";
Notify which field will be the top field for "Top" queries;
Define which field will be the target field for "Grid" queries.

There are a few types of filter controls:

Date and time: enter a date and time value in the format DD/MM/YYYY HH:MM:SS or MM/DD/YYYY HH:MM:SS, according to your location settings, or click the calendar button to pick a date and time from a calendar. Start: if informed, the query returns only records with a start date greater than or equal to this value; otherwise it does not affect query results. End: if informed, the query returns only records with a start date less than or equal to this value; otherwise it does not affect query results.

Filter: select an existing filter from the list. It is also possible to create new filters from here: select < New simple filter >, < New custom filter > or < New multi-filter > and click the "Edit" link to create a new filter. To edit an existing filter, select it from the list and click the "Edit" link.

Grouping: select an existing grouping from the list. It is possible to create a new grouping from here: select < New group > and click the "Edit" link. To edit an existing group, select it from the list and click the "Edit" link.
Parameter: some queries optionally allow users to input a field value that will be added to the query sentence. The example below shows a parameter control that asks for a program name field value.

Periodicity: select the periodicity and scope event parameters to be applied to the query.

Run: button that runs the query using the parameters provided by the filter fields.

Periodicity control: select the periodicity and scope event parameters to be applied to the query. The periodicity control is shown below. The available values for periodicity are:

LoadedData - No periodicity set. Returns a single line with the total summary
AnyTime - Groups the results by any time the events happen. Useful to see RMF data that occurs in intervals such as 5, 10, 15 or 30 minutes
Minute - Groups the results per minute interval
TenMinute - Groups the results on 10-minute intervals
Hourly - Groups the results per hour interval
Daily - Groups the results per day
Weekly - Groups the results per week-of-the-year interval
Monthly - Groups the results per month interval
Yearly - Groups the results per year interval
MinuteOfDay - Groups the results per minute of the day. You get at most 1440 minute intervals
HourOfDay - Groups the results per hour of the day. You get at most 24 intervals
WeekDay - Groups the results per day of the week: Mondays, Tuesdays... You get at most 7 days
DayOfMonth - Groups the records per day of the month. You get at most 31 intervals

Scopes select the source of data; currently the scope is only relevant to CPU Explorer queries. Users of all other products may always select the Finished at scope:

Finished at - Default scope. Normally all SMF events refer to the end time of the event, such as close time or job end
Intervals - Valid only in CPU Explorer; specifies that you want to see the interval records. Use this scope when you want to see an interval of time, mainly for long-running jobs like started tasks (STCs)
On the period - Obsolete scope. Used before the support of interval records, it selects only short-running jobs, i.e. jobs that start and end in the same period (hour, day, month)

Note: you cannot select a periodicity smaller than the one you have loaded. Periodicity is composed of a periodicity interval and a scope event. The periodicity interval is the granularity of the historical data: hourly, daily, weekly, monthly...
The scope event is how the historical data is acquired: by records generated from "Intervals" or "Finished at". Scope events are relevant only to the CPU Explorer and IO Explorer products, which use SMF record type 30. The "On the period" scope event is obsolete and remains only for backward compatibility.
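A few of the periodicities above can be illustrated by reducing a timestamp to its grouping key. This is only a sketch of the bucketing idea, not the product's implementation:

```python
from datetime import datetime

def bucket(ts, periodicity):
    """Reduce a timestamp to the grouping key for some of the periodicities above."""
    if periodicity == "TenMinute":
        # 09:47 falls in the 09:40-09:50 interval
        return ts.replace(minute=ts.minute - ts.minute % 10, second=0, microsecond=0)
    if periodicity == "Hourly":
        return ts.replace(minute=0, second=0, microsecond=0)
    if periodicity == "HourOfDay":   # at most 24 buckets
        return ts.hour
    if periodicity == "WeekDay":     # at most 7 buckets (Monday = 0)
        return ts.weekday()
    raise ValueError(periodicity)

t = datetime(2024, 5, 14, 9, 47, 30)
```

Note the difference between Hourly (one bucket per hour of the whole period) and HourOfDay (at most 24 buckets, folding all days together), which is the distinction the table above draws.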

Result grid

The result grid is the query interface component where query results are displayed. Data is organized in columns and rows: columns are made of fields, rows are made of record values. Columns can be added, removed, repositioned and sorted. Some result grids also display a final "Total line", which shows column values calculated according to each field's data type. Many other actions can be performed in this control; they are explained in sequence below. The result grid is shown in the picture below.

The actions that can be done in the result grid are:

Sorting rows: users can sort rows in ascending or descending order by clicking a column. Click a column header once to sort rows in ascending order; click again to sort them in descending order. The sort operation is shown below.

Moving columns: users can change the position of the columns. Drag a column header and drop it at the new desired position. The moving operation is shown below.

Select fields: fields can be added or removed. Right-click anywhere over the result grid to open the context menu, then choose the "Select fields" menu item to open the window shown below. Select which fields you want to display and click the "Ok" button to update the result grid with the new settings.

Save as report: a query generated with specific input data, parameter values, filters, grouping and periodicity can be saved to be used more than once. The query interface saves this information as a report. Choose the "Save as report" menu item to open the window shown below, type the report name and title, and click the "Ok" button to save it.

Open with Excel: another useful feature of the result grid is the capability to export data to MS Excel. Choose the "Open with excel" menu item to open the result grid information in MS Excel. In a few seconds a screen like the one shown above will open.

Export to file: data in the result grid can also be exported to a formatted text file. Users may customize some attributes to generate this file, as shown below. Select the options you want and click the "Ok" button to create the file.

Drill down: if you double-click a result grid line, a new window opens with that line's information and a more detailed query about the line is executed. This process is called drill down. Not all queries have this ability. The picture below illustrates what happens when drill down is available: in the example, the line with job name "ACADTU3" was double-clicked in the "Total summary" query. This triggered the drill down to the more detailed query "Summary per sysplex", which opened a new window and ran the query.

Context menus: some fields have shortcuts to other queries through the context menu facility; just press the right mouse button.

Totals line: the total line does not appear in all queries. When present, it is the last line of the result grid. Its main function is to summarize all the values of each column in one cell, according to the column data type and field behavior. The total line is shown below. The summarization rules are:

Start time - Minimum start time value of all column values
End time - Maximum end time value of all column values
Number - Sum of all column values
Time stamp - Sum of all column values
Text - Nothing to do
Boolean - Nothing to do

Note: all actions performed in the result grid are executed in memory, using the query result data already returned. This means that these actions do not submit queries to the database again.
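The summarization rules above can be expressed directly in code. The row data and type labels here are hypothetical; only the rules (minimum for start times, maximum for end times, sum for numbers, nothing for text and boolean) come from the table:

```python
def totals_line(rows, column_types):
    """Compute the totals line following the summarization rules above."""
    totals = {}
    for col, ctype in column_types.items():
        values = [r[col] for r in rows]
        if ctype == "start_time":
            totals[col] = min(values)
        elif ctype == "end_time":
            totals[col] = max(values)
        elif ctype in ("number", "timestamp"):
            totals[col] = sum(values)
        else:                       # text and boolean columns are not summarized
            totals[col] = None
    return totals

rows = [
    {"Start": 800, "End": 900,  "CPUtime": 2.5, "Job": "JOBA"},
    {"Start": 830, "End": 1000, "CPUtime": 1.5, "Job": "JOBB"},
]
totals = totals_line(rows, {"Start": "start_time", "End": "end_time",
                            "CPUtime": "number", "Job": "text"})
```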

Status bar

The status bar is located at the bottom of the query interface. It shows information about the query and the connection, and allows users to perform zoom operations on the result grid. The status bar is shown below.

If you click the "Active query" link, the T-SQL statement of the executed query is displayed. The figure below shows the active query window.

If you click the "Active filter" link, the WHERE clause of the executed T-SQL statement is displayed. The figure below shows the active filter window.

The five text items shown on the status bar correspond, respectively, to:

Connected server name;
Connected user name, represented here by the value sa;
Connected database, represented here by the value 4bearsDB;
Query elapsed time, represented here by the value 00:00:02;
Query rows returned, represented here by the value 21.

Finally, users can perform zoom operations through the zoom control located at the right-most part of the status bar:

Decrease zoom by 5%. Shortcut: CTRL + mouse wheel backward
Increase zoom by 5%. Shortcut: CTRL + mouse wheel forward
Select predefined zoom factors. Shortcut: CTRL + mouse wheel button to select 100%

Filters

The options menu item allows you to control advanced features of Dino Explorer products. You can create, modify or delete filters and groupings. You can also define individual profiles to grant exclusive access to filters, groupings, reports and so on. Finally, the options menu item allows you to manage connections with Dino databases. The options menu item and its submenu items are shown below. This menu contains the following submenu items:

Simple filter - Opens the simple filter editor window
Custom filter - Opens the custom filter editor window
Multi-filter - Opens the multi-filter editor window
Grouping - Opens the grouping editor window
Profile - Opens the profile manager window
Database connection - Opens the database configuration window

Simple filter

Use simple filters to create search conditions made only of simple comparison statements. The equality operator '=' is the only comparison operator available for simple filters. If several values are assigned to the same field, the comparison statements are connected by the logical operator OR. If more than one field is assigned to the filter, the comparison statements are connected by the logical operator AND. The simple filter editor window is shown below.

Actions available for simple filters:

Creating a new simple filter: click the 'New' link button.
Editing an existing simple filter: select the filter in the 'User filter' list.
Saving a simple filter: click the 'Save' link button. To save the simple filter under another name, click the 'Save as' link button.

Excluding a simple filter: click the 'Delete' link button.

Setting a simple filter as current: click the 'Set as current' link button. Setting a simple filter as current makes the filter being edited the default filter for the query in use. Use this feature for temporary purposes, when changes can be discarded; changes made to the current simple filter exist only in memory.

Selecting a target: select a target in the 'Target' list. A target is related to a Dino database table. The target defines which fields are available to be used and is different in each Dino Explorer product. For more information about targets, see the Dino Explorer products chapter in this guide.

Working with simple filters: select a field in the 'Filter fields' list. Add values to the selected field in the 'Values' text box and click the add button. You can also paste values from the clipboard. To assign an empty value to a field, just keep the 'Values' text box empty and click the add button. To remove a value added to a field, select the value(s) in the 'Selected values' list and press the < DEL > key or the remove button.
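The OR/AND composition rule described above (values of one field are OR'ed together; different fields are AND'ed) can be sketched by building the WHERE clause a simple filter would generate. The field names and exact SQL text here are illustrative, not the product's internal output:

```python
def simple_filter_where(selected):
    """Values of one field are OR'ed; different fields are AND'ed together."""
    clauses = []
    for field, values in selected.items():
        ors = " OR ".join(f"{field} = '{v}'" for v in values)
        clauses.append("(" + ors + ")")
    return " AND ".join(clauses)

# Two values on JobName, one value on SID:
where = simple_filter_where({"JobName": ["JOBA", "JOBB"], "SID": ["SYSA"]})
```

The result selects rows whose job name is JOBA or JOBB, and whose SID is SYSA.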

Custom filter

The custom filter should be used when a simple filter cannot help you. A custom filter has all the tools you need to build a complex filter. The custom filter editor window is shown below.

Actions available for custom filters:

Creating a new custom filter: click the 'New' link button.
Editing an existing custom filter: select the filter in the 'User filter' list.
Saving a custom filter: click the 'Save' link button. To save the custom filter under another name, click the 'Save as' link button.

Excluding a custom filter: click the 'Delete' link button.

Setting a custom filter as current: click the 'Set as current' link button. Setting a custom filter as current makes the filter being edited the default filter for the query in use. Use this feature for temporary purposes, when changes can be discarded; changes made to the current custom filter exist only in memory.

Selecting a target: select a target in the 'Target' list. A target is related to a Dino database table. The target defines which fields are available to be used and is different in each Dino Explorer product. For more information about targets, see the Dino Explorer products chapter in this guide.

Working with custom filters: custom filters have tools to help you create complex filters. The custom filter tools are explained below:

- Field group: the field group allows you to create comparison statements. A comparison statement is formed by a field, a comparison operator and a value. To create a comparison statement, first select a field in the 'Field' list, then select a comparison operator in the 'Condition' list, and finally enter the value in the 'Value' text box. Click the insert button to add the comparison statement to the custom filter. When you select a field, all comparison operators allowed for that field's data type are loaded automatically in the 'Condition' list. The field group is shown below.

- Logical group: the logical group allows you to use logical operators to connect comparison statements. The operator buttons insert the related logical operator into the custom filter, and the parenthesis buttons insert parentheses. The logical group is shown below.

- SQL group: the SQL group allows you to insert T-SQL code into the custom filter; click the SQL button to do it. The text typed will be used in the WHERE clause of the query generated by this custom filter. The SQL group is shown below. When you click the SQL button, the window below is shown. Enter the T-SQL statement in the text box and press the confirmation button to add the expression to the custom filter.

- Formatting group: the formatting group allows you to format the custom filter text. When custom filters grow, they become hard to read; use the formatting group controls to make the custom filter code easier to understand. One button increases the line indent, another decreases the line indent, and a third adds a line break to the custom filter. The formatting group is shown below.

- Delete group: the delete group allows you to exclude lines added to the custom filter. Click the delete button to remove all selected lines from the custom filter. The delete group is shown below.

- Moving group: the moving group allows you to move selected lines of the custom filter. Click the up button to move selected lines up, or the down button to move them down. The moving group is shown below.

- Filter items list: the filter items list is the container of all custom filter items. Each custom filter item is represented by one line in this container. You can select more than one line at the same time: to select multiple lines, click the desired lines while holding the 'CTRL' or 'SHIFT' key. When you copy selected lines using 'CTRL+C', the XML code related to these items is copied to the clipboard. You can paste the custom filter lines from the clipboard into another custom filter. The filter items list is shown below.

- SQL text box: the SQL text box is where you can see how your custom filter will be submitted as part of the WHERE clause of the query. When you copy the text in this control, the SQL code is copied to the clipboard. You cannot input text here directly; this control is read-only. The SQL text box is shown below.

Multi-filter

Use multi-filters to join existing simple filters and custom filters into a new one. The multi-filter editor window is shown below.

Actions available for multi-filters:

Creating a new multi-filter: click the 'New' link button.
Editing an existing multi-filter: select the filter in the 'User filter' list.
Saving a multi-filter: click the 'Save' link button. To save the multi-filter under another name, click the 'Save as' link button.

Excluding a multi-filter: click the 'Delete' link button.

Setting a multi-filter as current: click the 'Set as current' link button. Setting a multi-filter as current makes the filter being edited the default filter for the query in use. Use this feature for temporary purposes, when changes can be discarded; changes made to the current multi-filter exist only in memory.

Selecting a target: select a target in the 'Target' list. A target is related to a Dino database table. The target defines which fields are available to be used and is different in each Dino Explorer product. For more information about targets, see the Dino Explorer products chapter in this guide.

Working with multi-filters: if you double-click an existing filter in the 'Filters' list, the selected filter is added to the multi-filter. To remove a filter from the multi-filter, select it in the 'Selected filters' list and press the 'DEL' key. All filters added are connected by the logical operator 'AND'. If the negate check box is checked, all filters added to the multi-filter are negated.

Filter conversions

Simple filters and multi-filters can be converted to custom filters. The 'Save as' link button displays a window that has a custom filter conversion option. This option converts the selected filter to the custom filter type. This feature helps you modify and extend a filter in a flexible way.

Note: there are two special characters that can be used in filter values. The character '%' matches any sequence of characters: for example, the value 'DX%' returns every value that starts with 'DX'. The character '_' matches any single character in its position: for example, the value 'DX_' returns every value that starts with 'DX' and has exactly one more character in the third position.
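The wildcard semantics in the note above can be previewed by translating a filter pattern to a regular expression. This translation is only a sketch for checking which values a pattern would match (it ignores escaping of literal '%' or '_' in values):

```python
import re

def like_to_regex(pattern):
    """Translate the filter wildcards to a regex: '%' = any run, '_' = one char."""
    out = ""
    for ch in pattern:
        if ch == "%":
            out += ".*"
        elif ch == "_":
            out += "."
        else:
            out += re.escape(ch)
    return "^" + out + "$"

def matches(pattern, value):
    """True if the value would be returned by a filter with this pattern."""
    return re.match(like_to_regex(pattern), value) is not None
```

For example, 'DX%' matches 'DX123' but 'DX_' matches only three-character values such as 'DXA'.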

Grouping

Users can select one or more fields to be part of an aggregate feature called grouping. The grouping is applied to the GROUP BY clause of the queries to obtain summarized information. Like user filters, groupings can persist in the Dino database for further use. Groupings can combine pre-defined group fields with custom fields created by users. The grouping editor window is shown below.

Actions available for groupings:

Creating a new grouping: click the 'New' link button.
Editing an existing grouping: select the grouping in the 'Grouping' list.

Saving a grouping: click the 'Save' link button. To save the grouping under another name, click the 'Save as' link button.

Excluding a grouping: click the 'Delete' link button.

Selecting a target: select a target in the 'Target' list. A target is related to a Dino database table. The target defines which fields are available to be used and is different in each Dino Explorer product. For more information about targets, see the Dino Explorer products chapter in this guide.

Working with groupings: double-click a field in the 'Fields' list or in the 'Custom fields' list to add the field to the grouping. To remove fields added to a grouping, select the fields you want to remove and press the 'DEL' key. There is a special kind of field, called a custom field, that can be used in a grouping. There are two types of custom fields: the multi-field and the substring field. The custom fields are explained below.

- Multi-field: the multi-field is a custom field that allows users to join fields or substring fields to work as one. The result is a concatenation of these fields' values to be used in a grouping. String literals can also be added as part of a multi-field. To open the multi-field editor window, right-click over the 'Custom field' list to open the context menu and select the 'Edit multi-field' or 'New multi-field' menu item, as shown below.

The multi-field editor window is shown below. Double-click a field in the 'Fields' list or a substring field in the 'Substring fields' list to add it to the multi-field. To remove added fields, select them in the 'Selected fields' list and press the 'DEL' key. To add a string literal to the multi-field, just type the value in a free space in the 'Selected fields' list and press 'ENTER'.

- Substring field: the substring field is a custom field that allows users to use part of a string field's value, instead of its entire content, in a grouping. To open the substring field editor window, right-click over the 'Custom field' list to open the context menu and select the 'Edit substring field' or 'New substring field' menu item, as shown below.

The substring field editor window is shown below. Select a field in the 'Source Field' list, then type a number (1 to N) in the 'Start position' text box to set the start position within the string value, and finally type a number (1 to N) in the 'Length' text box to set the number of characters to be returned, beginning from the start position.
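The substring field's inputs map naturally onto SQL's SUBSTRING, whose start position is 1-based like the editor's 'Start position'. The generated expression shown here is an assumption for illustration; the guide only states that grouping feeds the query's GROUP BY clause:

```python
def substring_field(field, start, length):
    """Build a T-SQL SUBSTRING expression from the editor's 1-based inputs.
    The exact SQL the product generates is an assumption."""
    return f"SUBSTRING({field}, {start}, {length})"

def apply_substring(value, start, length):
    """Python equivalent of the same rule (convert to 0-based slicing)."""
    return value[start - 1:start - 1 + length]

# Grouping job names by their first three characters, for example:
expr = substring_field("JobName", 1, 3)
```

A grouping on this custom field would then collect all jobs whose names share the same three-character prefix into one summary row.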

Reports

Reports are helpful when a certain query is executed many times. If you want to run queries periodically, you can save them as reports. To learn how to save queries as reports, see the corresponding text in the "Query interface" chapter. The reports menu item is shown below.

To open a report, click the "Reports" menu item in a Dino Explorer product. The following window is shown. Select the desired report in the list and click the "Open" button. This opens the query interface window to submit the report to the Dino database with all previously entered parameters. To delete a report, select it in the list and click the "Delete" button.

Configuration Database connection Dino Explorer products need to be connected to a valid Dino database to work properly, and more than one Dino database may be available at your company. Use this option to configure the connection attributes, test the connection and set the default connection for all Dino Explorer products. The database connection window is shown below: How to configure the Dino database connection:

Database provider (default: MS-SQL Server) - Choose 'MS-SQL Server' if your database server is Microsoft SQL Server. Choose 'Greenplum' if your database server is the EMC Greenplum data computing appliance.
Server Name (default: localhost) - The server name can be a DNS name or an IP address. If you use SQL instances: DINOSRV\SQLEXPRESS. If the connection requires a port address, add the port after the server name, separated by a comma.
Authentication (default: Windows Authentication) - Select the database server authentication mode. If authentication is done by the database server, select 'Database Authentication'. If authentication is done by the operating system (Windows), select 'Windows Authentication'.
User Name (default: sa) - If database authentication is selected, type a valid Dino database user name here.
Password - If database authentication is selected, type the password for the Dino database user.
Database (default: dinodb) - Type the name of the Dino database to connect to.

Test Connection: Click this button to test the connection attributes and enable the confirm button. To confirm the changes and set the connection to the Dino database, click the confirm button. Profile By default, changes made to resources like filters, groupings and reports are shared by all Dino Explorer users. To grant exclusive access to these resources, users can create different profiles to keep their changes protected. The profile manager window is shown below: To create a new profile, just type the profile name in the 'User Profile' text box, click the create button and confirm the operation. The new profile is created and set as the current profile. All changes made from now on belong to this profile, and only this profile can access these resources. Changes made in the default profile, named '< Common >', are shared with all profiles. Help In this menu item you can find detailed information about Dino Explorer products. The help submenu items are shown below:
Help - Shows a quick reference guide to each Dino Explorer product's main features. Users can find short documentation here, with hypertext navigation, answering questions such as how to use the product interface, how to create filters, how to submit queries, or about specific query results. At any time while using Dino Explorer products, press the 'F1' key to show the related topic in the help document.
License - Click this menu item to display information about the Dino Explorer license agreement. Important information, such as the expiration date, can be found here.
About - Click this menu item to display information about the currently open Dino Explorer product.

Import and Loading Data Data loading is the process of importing the mainframe events into the Dino DB and consolidating the raw data into the historical views. You can load data in three alternative ways: an interactive interface (GUI) for all data loading functions; a Windows service that connects directly to the mainframe and downloads the events in real time; a command line interface (CLI) that performs pre-defined functions, normally used to schedule load views tasks. Import and Load Process We use the CICS Explorer to illustrate the loading process, which is similar for the other products of the Dino Explorer Suite. The following diagram shows the two-phase process that loads the mainframe events, i.e. SMF records or any other information, into the DinoDB database: The mainframe events are the execution details of each CICS transaction. Each terminal execution is inserted into the CICS importation table (cicsdata table), where each transaction is one row. The following grid shows an example:

The CICS raw events (CICS terminals) are inserted into the cicsdata table (importation table). At a certain time, the information in the cicsdata table is consolidated with a periodicity defined by the installation, such as per hour, and this data is inserted into the historical table cicshist. Depending on the type of event, the historical table can save a lot of space: in this case, we are consolidating similar terminals into the same record, so we use just 3% of the total size to keep the same information, allowing a much longer history. Dino Data Loader is a program that gathers the functionality to execute data importation and load views for all products of the Dino Explorer family: CPU Explorer, Dataset Explorer, DASD Explorer, I/O Explorer and CICS Explorer. Each of these products can import and load its own data; the Data Loader helps you simplify and optimize this process. Dino Data Loader In the Data option you have several functions; see the figure below.

Import from MVS Server This function transfers the raw data retrieved from the MVS Server on the mainframe to the intermediate tables called cpudata, iodata, dsdata, vsdata and cicsdata, depending on which views you intend to load. You can import data from MVS servers through the interface below, where you choose the MVS servers to import from and select the desired options.

Import from CSV File Importing the SMF data into the database transfers the raw data retrieved from SMF on the mainframe to the intermediate tables called cpudata, iodata, dsdata, vsdata and cicsdata, depending on which views you intend to load. The initial step of the importing process is to select which products you intend to load. To do this, check the box on each product to indicate whether the next operations will act on it. Then check the box to indicate whether the previously imported tables must be cleaned. Next, indicate whether raw data will be imported into the intermediate tables. If you check this item, you must use the Files tab to select which files will be imported.

The third step indicates whether duplicated data, possibly loaded into the intermediate tables, must be deleted. In this phase, check the box to indicate whether you intend to load the product views. If so, use the product views tabs (CPU views, I/O views, Dataset views and/or CICS views) to set the parameters for each one. The last phase clears the importation tables and recreates all indexes on the historical tables:

Selecting files to import Use the Files tab to set the list of files to be imported and other parameters. You may import many files before proceeding to the load phase; it depends on the policy you establish at your installation. Some examples: A single CSV file contains all the SMF data to load; One CSV file per partition or per sysplex; One CSV file per day, consolidated once a week; A mixed approach. There is only one rule: if you load the same data twice, these events will be counted twice. We provide some facilities to avoid trouble: 1. Visualize the imported data. You can summarize per day and partition to check whether a certain day is missing or whether the period corresponds to what you expect; 2. A validating function to check whether there are duplicate events in the intermediate tables; 3. Visualize the duplicate data found in the previous step; 4. Remove the duplicate entries; 5. You can always check in the historical views whether that data has already been loaded; 6. As a last resort, you can remove (undo) the load operation through the Actions interface. In addition, you can specify the number of concurrent threads and the size of each batch inserted in each database operation. If you choose to import data, that operation is executed before loading views.
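The duplicate check mentioned above can be sketched as follows. This is an illustrative example; the identifying columns ("sysid", "job", "end") are hypothetical, and a real check would use whatever columns uniquely identify an event in the intermediate table.

```python
# Sketch: if the same CSV file is imported twice, identical events
# appear more than once in the intermediate table. Counting rows per
# identifying key and keeping keys with count > 1 finds them.
from collections import Counter

def find_duplicates(rows, key_columns):
    counts = Counter(tuple(row[c] for c in key_columns) for row in rows)
    return {key: n for key, n in counts.items() if n > 1}

rows = [
    {"sysid": "SYSA", "job": "PAYROLL1", "end": "2007-01-03T23:42"},
    {"sysid": "SYSA", "job": "PAYROLL1", "end": "2007-01-03T23:42"},
    {"sysid": "SYSA", "job": "BACKUP01", "end": "2007-01-03T23:55"},
]
dups = find_duplicates(rows, ["sysid", "job", "end"])
```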

Load Views After you have imported the CSV files into the intermediate tables, you can load the data into all available views through the Load Views tab of the Load mainframe data menu. Load views is the process of consolidating the imported data into the historical databases. This process feeds the various views reported by the products with the new data imported into the intermediate tables. There are two kinds of views: Historical views, where the information is grouped into period intervals selected by the user and can be kept for long periods of time, or even compacted into larger intervals; Execution views, where there is one record for each execution. Execution views tend to be very large, so they are kept for short periods of time; the step view is even larger than the job execution view, because each job normally has many steps. For each Dino Explorer product, you need to create one or more load view tasks to consolidate the information. After that you can save the configuration, giving it a name:

Some notes about collection interval and scope event The historical views are records compiled according to two very important parameters: collection interval and scope event. The collection interval is the grouping factor for the view, i.e. the granularity of the history. If you need to see the view in periods of 1 hour, select the hourly period, and so on. For example, if you select the period interval as monthly, the whole imported data is summarized into monthly records; if you select the interval as daily, the system summarizes records for each day found in the intermediate tables (cpudata, iodata, dsdata, vsdata and cicsdata), and afterwards you can see them summarized in any of the larger intervals: daily, weekly and monthly. So, select the smallest interval that you intend to see in the interfaces. As a general rule: the smaller the interval selected, the more records will be created in the database.

Scope event is a mode defining whether events (jobs and steps) are assigned considering only the event end time ('finished at'), or considering both the start and end time of the event ('on the period'). The timeline below presents three job executions against an hourly period interval, the 23:00-24:00 hour of 1/3/2007: Job 1 (22:28 to 23:42), Job 2 (23:12 to 23:52) and Job 3 (23:05 to 23:55).

On the period - counted in the 23h interval: Job 2 (23:12 to 23:52) and Job 3 (23:05 to 23:55). All events that cross the interval period boundaries, such as Job 1, are lost. This mode is useful for load management.
Finished at - counted in the 23h interval: Job 1 (finished at 23:42), Job 2 (finished at 23:52) and Job 3 (finished at 23:55). This is the default mode. It is more precise because no event is lost; good for accounting and billing purposes.

Note: SMF records resource utilization at the end of steps and jobs; however, some jobs run for days or even months, such as the JES2 and RACF STCs (started tasks), which will probably run for the entire lifetime of an IPL. This leads to an enormous amount of resources being accounted to these jobs on IPL days. Normally you select the 'finished at' option so as not to lose any event's resource count. However, if you intend to see the load level over the days, select the 'on the period' option. Keep in mind that any event crossing the period interval is discarded: if you select a daily period interval, all jobs that cross midnight will not be counted. You can have both views by selecting two load views: one for 'on the period' events and another for 'finished at' events.
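The two scope-event modes can be sketched as a simple interval test. This is an illustrative example using the three jobs from the timeline above, with clock times expressed as decimal hours.

```python
# Sketch: 'finished at' assigns an event to the interval containing its
# end time; 'on the period' keeps only events that start and end inside
# the same interval, discarding boundary crossers such as Job 1.

def in_interval(event, interval_start, interval_end, scope):
    start, end = event
    if scope == "finished at":
        return interval_start <= end < interval_end
    # "on the period": both start and end must fall inside the interval
    return interval_start <= start and end < interval_end

# Job 1: 22:28-23:42, Job 2: 23:12-23:52, Job 3: 23:05-23:55
job1 = (22 + 28/60, 23 + 42/60)
job2 = (23 + 12/60, 23 + 52/60)
job3 = (23 + 5/60, 23 + 55/60)
hour_23 = (23.0, 24.0)

finished = [j for j in (job1, job2, job3) if in_interval(j, *hour_23, "finished at")]
on_period = [j for j in (job1, job2, job3) if in_interval(j, *hour_23, "on the period")]
# finished: all three jobs; on_period: only Jobs 2 and 3 (Job 1 crossed 23:00)
```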

CPU Explorer Views There are currently the following views in the CPU Explorer: CEC Activity History; CPU Job History; CPU LPAR History; CPU Programs History; CPU Services History; CPU Users History; LPAR Activity History. See below the appearance of the screen with this option selected: The considerations about periodicity and scope described above apply here as well.

I/O Explorer Views There is currently a single option to load in the I/O Explorer: IO History. By loading this option, the historical views listed below become available for future use: LPAR historical view; Jobs historical view; Programs historical view; Users historical view. There are no views for specific executions. The next picture shows the appearance of the screen after this single option was selected. The considerations about periodicity and scope described above apply here as well.

Dataset Explorer Views There are currently the following views in the Dataset Explorer: Catalog History; Dataset Sizes; Inventory History; NVSAM History; SMS History; Tape Usage; VSAM History. See below the appearance of the screen with this option selected: The considerations about periodicity and scope described above apply here as well.

Dasd Explorer Views There are currently the following views in the Dasd Explorer: CACHE History; CACHE Volume History; Channel Lpar History; Device History; Device Volumes History; LCU Config History; LCU History; Volume Config History; Volume History. See below the appearance of the screen with this option selected: The considerations about periodicity and scope described above apply here as well.

CICS Explorer Views The CICS Explorer has a single view, the CICS History: See below the appearance of the screen with this option selected: The considerations about periodicity and scope described above apply here as well.

DB2 Explorer Views The DB2 Explorer has a single view, the Db2 Accounting History: See below the appearance of the screen with this option selected: The considerations about periodicity and scope described above apply here as well.

Configuration Tasks This chapter describes some housekeeping functions, such as: how to free some database space; how to check the result of importing and loading actions; and how to revert (undo) a mistaken load action. Administrative Tools Actions Mainframe datacenters come in all sorts of sizes and complexities: a single SMF file for the whole installation; separate SMF dump datasets per partition or sysplex; merged SMF dump datasets; daily, weekly and monthly SMF dump datasets; a mix of all these possibilities. Your production needs and expectations dictate the way you load data into the Dino Explorer database, and the Dino Data Loader was designed to let you load any data freely into the historical views. Remember the rule: if you load the same data twice, these events will be counted twice. To let you load data freely, the Dino Data Loader marks every data transformation as an Action: importing mainframe data; loading a historical view; loading an execution view; removing job and step executions; and so on. Each new action receives an Action ID, and all data inserted through the action is tagged with this Action ID, enabling you to reverse most Actions. If, for example, you loaded the same data twice, or your last load was incomplete, it is much easier to undo the last load and reload again. You can list the Actions through the List Actions menu. The Action list is very helpful to check the execution status of the latest loads and data importations, as well as their duration. Initial Screen

Result Screen Action Status Table Icon Description: Action completed with success. The action was canceled by the user. There was an error in the action execution; the action was not completed and the operation was rolled back. The action is in execution. Undefined state; this state should not occur. The action was removed (reversed) by user command. Reversing Actions To reverse an action, simply select the line corresponding to the action and press the Undo Action button, which is automatically enabled. You can select many actions in a single operation. At the end of the reversal, a dialog box shows the result of the reverse operation. Note: Not all Actions can be reversed, such as Remove Job and Step Executions.
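The Action ID mechanism described above can be sketched as follows. This is an illustrative example with an in-memory list standing in for a historical table; the column names are hypothetical.

```python
# Sketch: every row written by a load is tagged with the action's ID,
# so reversing the action is just deleting the rows carrying that tag.

def run_load_action(table, new_rows, action_id):
    for row in new_rows:
        row["action_id"] = action_id   # tag each inserted row
        table.append(row)
    return action_id

def undo_action(table, action_id):
    # Reverse the action by removing everything it inserted
    table[:] = [row for row in table if row["action_id"] != action_id]

hist = []
a1 = run_load_action(hist, [{"job": "PAY1"}, {"job": "PAY2"}], 1)
a2 = run_load_action(hist, [{"job": "PAY1"}, {"job": "PAY2"}], 2)  # same data loaded twice
undo_action(hist, a2)   # undo the duplicate load; the first load survives
```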

View Log All actions and some important events that modify the historical and execution views are logged in the log table (Log). In the log you can check the quantity of records inserted by any data importation, load view or deletion action: Initial Screen Result Screen If you want, you can export the query result from the Action list or View log to Excel or to a file. To do so, just click File and choose your option.

Swap data Dino Messaging needs to stop loading data into the 'xxxdata' table while that data is loaded into a historical table. This process may take a few minutes, and during this time no real-time data would reach the table. What Swap data does is rename the 'xxxdata' table to 'xxxdata1' and the existing 'xxxdata1' table to 'xxxdata', which allows Dino Messaging to keep loading new data. The whole process is very fast, and the sequence is: 1 - Stop importing into xxxdata; 2 - Rename xxxdata to xxxdata1; 3 - Rename xxxdata1 to xxxdata; 4 - Start importing into xxxdata. Now you can execute load views from xxxdata1. Compressing and Purging Data We would like to keep the information for a long time, but unfortunately this has an associated cost, so we need good planning about which information will be kept longer than the rest. The Dino database historical tables allow several granularities of information inside them. For example: keep the information of the last 18 months summarized per hour, the next 18 months per day, after that per month, and discard information older than 7 years. Once we have defined what to keep and what to discard, the compress and purge data processes allow us to implement it. You can compress some historical tables to free space in the DinoDB database. The list of products and historical tables: Cics - CicsHist; Cpu - CpuJobs, CpuLpar, CpuUsers, CpuPrograms, CpuServices; DataSet - DsHist, VsHist, SmsHist; Io - IoHist.

The available values for scope event are: On the period - only events that start and complete within the period interval; Finished at - any event that completes within the period interval; Intervals - intermediary execution data based on SMF interval records. Note: keep in mind that you may need to repeat the compress and purge data process for multiple scopes and multiple tables. Compress data This is the Compress data interface: On the Actions tab, select the product, the target view (historical table) and the scope event, and click the Query button. The query returns all scope actions executed on the target view associated with the scope event. Keep selected the actions that will be part of the compression process.

On the Options tab, you specify two things: the new periodicity and the execute option. Then click the Execute Now button to initiate the process. After the process finishes, the Messages tab shows the result of the run and the conclusion notification message.
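The effect of choosing a new periodicity can be sketched as follows. This is an illustrative example; the field names are hypothetical, and a real compression would re-summarize all measure columns, not just one.

```python
# Sketch: compression re-summarizes hourly historical records into the
# new periodicity (here, daily), trading granularity for space.
from collections import defaultdict

def compress_to_daily(hourly_rows):
    daily = defaultdict(float)
    for row in hourly_rows:
        day = row["period"][:10]   # "YYYY-MM-DD" part of "YYYY-MM-DDTHH"
        daily[day] += row["cputime"]
    return [{"period": d, "cputime": t} for d, t in sorted(daily.items())]

hourly = [
    {"period": "2007-01-03T22", "cputime": 1.5},
    {"period": "2007-01-03T23", "cputime": 2.5},
    {"period": "2007-01-04T00", "cputime": 1.0},
]
daily = compress_to_daily(hourly)   # 3 hourly rows become 2 daily rows
```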

Purge records If you need to delete records from some table in the Dino DB, you can use the Purge Records option. You can select all products (click Select product) or select one or more products to purge on the Parameters tab, and select Load phase 2, Purge execution records. In each product's tab, you can specify the period of records to purge (from/to) or select how many days of older records will be purged (purge data older than). With the Add button you specify which tables will be part of the data deletion process.
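The 'purge data older than' option can be sketched as a date-cutoff filter. This is an illustrative example only; the column names are hypothetical, and the real process deletes rows from database tables rather than filtering a list.

```python
# Sketch: compute the cutoff date from the 'older than' day count and
# keep only rows on or after it.
from datetime import date, timedelta

def purge_older_than(rows, days, today):
    cutoff = today - timedelta(days=days)
    return [r for r in rows if r["date"] >= cutoff]

rows = [
    {"job": "OLDJOB", "date": date(2006, 1, 1)},
    {"job": "NEWJOB", "date": date(2007, 1, 3)},
]
kept = purge_older_than(rows, 90, today=date(2007, 1, 10))
```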

After completing all specifications of the purge process for all products, just click the Start button to run the record deletion process. After the process finishes, the Messages tab shows the result of the run and the conclusion notification message.

If you want, it is possible to save the parameters used in Compress or Purge data for later use in the graphical interface or in the command line (DinoUtil). To do this, simply click the File / Save As... option and type the name of the configuration to be saved, as you can see on the screen below.

Dasd Discovery In the Dasd option you have several functions; see the figure below. Load configuration In this function, you must specify some fields to obtain information about the mainframe I/O structure. Part one: specify the MVS server (mainframe) to connect to and click the Connect button (1). After clicking the button, information about the mainframe I/O structure is received (5). Part two: specify the address range and online devices (2), choose whether to show loaded device messages (3), and click the Start button (4) to execute. The execution returns the device ranges found (6) and load information for each device (7).

Update device configuration Dynamically queries z/OS and updates the space of each DASD volume already registered. Information about new DASD volumes is not processed. Some tables used by DASD have two important fields: infodate - identifies the first date/time the information was found; lastupdate - identifies the last date/time the information was verified. Each execution of the load configuration or update device configuration process creates new records (infodate = lastupdate) or updates the date/time in the lastupdate field. This process can be executed from the graphical interface (Dino Dataloader - Dasd), from the command line (DinoUtil) or as an automated schedule (Windows Task Scheduler). First, establish a connection with a mainframe (MVS Server) and click the Connect button (1). After clicking the button, information about the mainframe I/O structure is received (5). After that, set the other parameters: parallel connections; filters and view volumes (query the current configuration); update only space; show loaded volumes. Click the Start button to execute (4). The execution returns load information for each device (7).
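The infodate/lastupdate rule above can be sketched as follows. This is an illustrative example with an in-memory dictionary standing in for the DASD configuration tables; the keys and field names are hypothetical.

```python
# Sketch: a volume seen for the first time gets infodate == lastupdate;
# on later runs only lastupdate moves forward.

def record_volume(config, volser, now):
    if volser not in config:
        config[volser] = {"infodate": now, "lastupdate": now}   # new record
    else:
        config[volser]["lastupdate"] = now                      # refresh only
    return config[volser]

config = {}
record_volume(config, "VOL001", "2007-01-03T10:00")   # first discovery
record_volume(config, "VOL001", "2007-01-04T10:00")   # later verification
```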

If you want, it is possible to save the parameters used in Load configuration or Update device configuration for later use in the graphical interface or in the command line (DinoUtil). To do this, simply click the File / Save As... option and type the name of the configuration to be saved, as you can see on the screen below.

Set current configuration Adjusts the current DASD infrastructure from this information. Some tables used by DASD have two important fields: infodate - identifies the first date/time the information was found; lastupdate - identifies the last date/time the information was verified. Each execution of the load configuration or update device configuration process creates new records (infodate = lastupdate) or updates the date/time in the lastupdate field. This process can be executed from the graphical interface (Dino Dataloader - Dasd), from the command line (DinoUtil) or as an automated schedule (Windows Task Scheduler). You can adjust the current configuration by selecting a date/time reference and a partition and clicking the Update now button. This process selects the records that satisfy the criteria (lastupdate >= reference date) and populates the current config tables with them.

Configuration Database Connection The first time you use Dino Explorer, it will probably show a message box stating a database error and open the Database Connection dialog box: Normally the only two parameters users need to fill in are the name of the Dino database server and the name of the Dino Explorer database (dinodb). However, you can enter all the relevant database connection parameters:
Server name - name and instance of the database server. Normally SQL Server just requires the name of the server; SQL Express users may enter servername\SQLEXPRESS.
Database name - the name of the database inside SQL Server. Normally it is dinodb, but it can have other names or even be renamed.
SQL Server Authentication - the default is Windows authentication, i.e. no user or password must be specified and the network authentication is sufficient. Select SQL Server Authentication if you have the users configured on the SQL Server.
Username - SQL Server user name.
Password - SQL Server user password.

Test the connection This button is very helpful to check whether the parameters you configured are correct. After you change the database connection, the connection string is saved in the configuration file DinoConfig.xml. This file is located in: Windows XP, Server 2003: C:\Documents and Settings\All Users\Application Data\Dino Explorer; Windows 7, Vista and Server 2008: C:\ProgramData\Dino Explorer. You can copy the configuration file to the Dino Explorer application folder. Product License Dino Explorer is currently protected by a license XML file issued to your installation. You can load a new license file through the Product license panel in the Options menu: To load a new license, press the Load button and point to the license file received from the product dealer. Note: Do not modify the license file; it is an XML-signed document and would be invalidated.

MVS Servers To establish a connection between the mainframe and the Dino server, you need to create a connection. To create a connection, fill in the parameters described below and click the Save button. After that, you can test the connection by clicking the Test connection button.
Server name - the name of your connection.
Host name / IP - the mainframe IP address or the mainframe name registered in DNS.
Port - the port number that will be used to connect to the mainframe.
Test the connection This button is very helpful to check whether the parameters you configured are correct. After you create a mainframe connection, the connection string is saved in the configuration file DinoConfig.xml.

Loader fields configuration When you execute the load views or compress data process, the data is grouped by mandatory and optional fields; the optional fields are listed below for each product and table. By default, all optional fields are included in the group-by fields when the product is installed, but the user can remove any optional group-by field by simply unchecking it. When you are finished, click the Save button and the information is saved in the configuration file DinoConfig.xml.

DinoMessaging Service A Windows service responsible for downloading real-time events from the mainframe. Remembering the process: Mainframe process The following diagram shows the started tasks (STCs) involved in Dino Messaging: DXQUEUE, DXSMF, DXPLTCP and DXPL.
DXQUEUE - saves the records to be downloaded in memory; it uses 64-bit addressing to save the data.
DXSMF - intercepts the SMF records in real time (exit), processes them and saves the data in the DXQUEUE area.
DXPLTCP - TCP server that responds to requests from the Dino Server on the open platform.
DXPL - subsystem interface responsible for initiating and terminating DXPLTCP and holding all functions provided by the DXPL server.
The following products require Dino Messaging: AIM - Application Impact Monitoring; Dino Explorer with real-time updates. The following diagram shows the basic mechanism of SMF record interception, buffering, and how the records are downloaded:

1. The SMF exit gets the SMF record and finds a free DXSMF buffer to copy the record contents; 2. One of the tasks of the DXSMF address space builds a message based on the SMF record copy and puts this message in the DXQUEUE address space; 3. A remote process running on the Dino Server, called Data Loader, polls the Dino TCP Server on the mainframe (DXPLTCP) for a certain number of messages; 4. In response to the Dino Server request, DXPLTCP gets the requested messages from the DXQUEUE and sends them to the Data Loader process. Server process There is one step necessary before you can receive events from the mainframe: 1. Install the DinoMessaging service; see the manual Dino Explorer Suite Installation Guide, chapter DinoMessaging Automation. After this process is completed, you can receive real-time events from the mainframe. The communication process between the mainframe and the Dino server is known as the DinoMessaging architecture.
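The pull protocol in steps 3 and 4 above can be sketched as follows. This is an illustrative example only: the classes below are stand-ins for the mainframe side, not the real DXPLTCP/DXQUEUE interfaces, and the batch size and message format are hypothetical.

```python
# Sketch: the Data Loader repeatedly asks the mainframe TCP server for
# up to N messages and stops when an empty reply signals a drained queue.
from collections import deque

class FakeDxQueue:
    """Stand-in for the DXQUEUE message store on the mainframe."""
    def __init__(self, messages):
        self.q = deque(messages)
    def get(self, max_messages):
        batch = []
        while self.q and len(batch) < max_messages:
            batch.append(self.q.popleft())
        return batch

def data_loader_pull(queue, batch_size):
    received = []
    while True:
        batch = queue.get(batch_size)
        if not batch:            # empty reply: nothing left to download
            break
        received.extend(batch)   # in the product, rows go to the xxxdata table
    return received

queue = FakeDxQueue([f"smf-record-{i}" for i in range(5)])
rows = data_loader_pull(queue, batch_size=2)   # pulled in batches of 2
```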

DinoUtil CLI interface When you run the DataLoader program, some functions let you save the script specification for later use (see the list below): Import data; Swap data; Compress data; Purge records; Load configuration; Update device configuration. The advantage of the DinoUtil program is that you can schedule the execution of these saved scripts at the most appropriate times. The program has two features: 1. Load - executes the tasks defined in a saved script file; 2. Import - imports data records into DinoDB using files in CSV format. See the figure below for the DinoUtil syntax.

Appendix Database Tables Importation Tables The importation tables are used to save the raw data loaded from the mainframe; they are called xxxdata and are normally truncated after a Load View operation.
cicsdata (Cics) - CICS Imported Data: transaction execution details such as CPU time, duration and bytes transmitted.
cecdata (Cpu) - CEC Activity Imported Data: CSV records type CEC extracted from SMF type 70. Normally truncated every day after the load views.
cpudata (Cpu) - CPU Imported Data: details about jobs, programs and steps running on the mainframe.
lpardata (Cpu) - LPAR Activity Imported Data: CSV records type LPR extracted from SMF type 70. Normally truncated every day after the load views.
cachedata (Dasd) - Cache Imported Data: CSV records type CHE extracted from SMF type 74. Normally truncated every day after the load views.
chpdata (Dasd) - CHPID Imported Data: CSV records type CHP extracted from SMF type 73. Normally truncated every day after the load views.
devdata (Dasd) - DEVICE Imported Data: CSV records type DEV extracted from SMF type 74. Normally truncated every day after the load views.
lcudata (Dasd) - LCU Imported Data: CSV records type LCU extracted from SMF type 78. Normally truncated every day after the load views.
pavdata (Dasd) - Hyper-PAV Imported Data: CSV records type LCU extracted from SMF type 78. Normally truncated every day after the load views.
voldata (Dasd) - Volumes Imported Data: volume occupation imported data, based on DCOLLECT "V" records.
catdata (Dataset) - Catalog Imported Data: dataset catalog operations (CATALOG, DELETE and RENAME), based on SMF 61/65/66 records.
dsdata (Dataset) - NVSAM Imported Data: CSV records type 14 and 15 extracted from SMF types 14 and 15 (NVSAM close). Normally truncated every day after the load views.
invdata (Dataset) - DS Inventory Imported Data: dataset (VSAM and non-VSAM) DASD occupation imported records, based on DCOLLECT "D" records.
smsdata (Dataset) - SMS Imported Data: SMS dataset (VSAM and non-VSAM) activity imported data, based on SMF 42.6 records.
vsdata (Dataset) - VSAM Imported Data: CSV records type 64A from SMF type 64 (VSAM close). Normally truncated every day after the load views.

172 dba Imported Data dbadata Db2 CSV records type DBA from SMF type 101. Normally truncated every day after the load views. IMS Imported Data imsdata Ims CSV records type IMS extracted from Log Archive exit or Batch utility. Normally truncated every day after the load views. IO Imported Data iodata Io CSV records type 00A extracted from SMF type 30(EXCP). ASC CPC Imported Data ASC LPAR Imported Data ASC WLM Imported Data Normally truncated every day after the load views. asccpcdata Msu CSV records type ACPC from SMF type 225(*). Normally truncated every day after the load views. asclpardata Msu CSV records type ALPR from SMF type 225(*). Normally truncated every day after the load views. ascwlmdata Msu CSV records type AWLM from SMF type 225(*). Normally truncated every day after the load views. (*) SMF type 225 Records generate from ASC (Automatic Soft Capping) tool. Dino Explorer Suite User's Guide v

Historical Tables

The historical tables represent the information compiled into historical records during the Load View process, and should be kept for long periods of time.

Target | Table name | Product | Description
CICS History | cicshist | Cics | CICS transaction history consolidated into periodicity intervals during the load views.
CPU LPAR History | cpulpar | Cpu | LPAR view records consolidated into periodicity intervals during the load views.
CPU Job History | cpujobs | Cpu | Job execution records consolidated into periodicity intervals during the load views.
CPU Programs History | cpuprograms | Cpu | Programs view, consisting of all executions of specific programs (EXEC PGM), consolidated into periodicity intervals during the load views.
CPU Users History | cpuusers | Cpu | Users view, consisting of all executions by a specific user, consolidated into periodicity intervals during the load views.
LPAR Activity History | lparhist | Cpu | LPAR view records consolidated into periodicity intervals during the load views.
CEC Activity History | cechist | Cpu | CEC view records consolidated into periodicity intervals during the load views.
Cache History | cachehist | Dasd | Cache view records consolidated into periodicity intervals during the load views.
Device History | devhist | Dasd | Device view records consolidated into periodicity intervals during the load views.
LCU History | lcuhist | Dasd | LCU view records consolidated into periodicity intervals during the load views.
Volume History | volhist | Dasd | Volume view records consolidated into periodicity intervals during the load views.
Catalog History | cathist | Dataset | Catalog history consolidated into periodicity intervals during the load views.
NVSAM History | dshist | Dataset | Non-VSAM history by jobs, programs, LPARs and users, consolidated into periodicity intervals during the load views.
Inventory History | invhist | Dataset | Historical records of dataset (VSAM and non-VSAM) DASD occupation.
SMS History | smshist | Dataset | Historical records of SMS dataset (VSAM and non-VSAM) activity.
VSAM History | vshist | Dataset | VSAM history by jobs, LPARs and users, consolidated into periodicity intervals during the load views.
DB2 History | dbahist | Db2 | DB2 transaction history consolidated into periodicity intervals during the load views.
IO History | iohist | Io | History of all device accesses by jobs, programs, LPARs and users, consolidated into periodicity intervals during the load views.

Execution View Tables

The execution view tables represent events on the mainframe, such as a job execution or the details of its job steps. These tables are very rich in information; however, they cannot be kept forever.

Target | Table name | Product | Description
CICS Transactions | cicstransactions | Cics | Transaction execution details such as CPU time, duration and bytes transmitted.
CPU Job Executions | cpujobexecs | Cpu | Job execution details. Should be kept for short periods.
CPU Job Steps | cpujobsteps | Cpu | Details of each step of a job execution. Should be kept for short periods.
CPU Services History | cpuservices | Cpu | Service record details, such as all service units and counter times.
EXEC Imported Data | execdata | Smart | Details about jobs, programs and steps running on the mainframe (clone of table cpudata).
AIM User Input | AimUserInput | Smart | User input event table.
SYSLOG Imported Data | syslogdata | Smart | SYSLOG messages.

Work Tables

Target | Table name | Product | Description
CEC Work table | CecDataUnique | Cpu |
LPAR Work tables | LparDataUnique | Cpu |
EXEC Job IDs | execjobids | Smart | Work table
EXEC Job Start | ExecJobStart | Smart | Work table

DinoCmd Query CLI interface

When you execute queries in any Dino Suite product, it is possible to save the query for future re-use. The example below uses Dino CPU Explorer to save a query as a report: if you click any column or any row, an option to save the query as a report appears; it is very easy.

The CLI application DinoCmd lets the user re-execute a previously saved report, passing the arguments on the command line, and direct the output to the screen (the default) or to a file (redirecting or appending). The information generated by the report can then be used however you wish.

Portal Customization

Introduction

The objective of this chapter is to provide an easy-to-follow, step-by-step, comprehensive guide to developing customized reports, using the Dino Explorer APIs to execute queries against the Dino Explorer database and present the results in charts and grids.

Dino Explorer Suite delivers a set of applications that query the Dino database and show the results in pre-formatted grids. Beyond this, 4bears offers APIs that allow users to execute queries against the Dino database and personalize the way the resulting dataset is presented.

Architecture

Besides the Dino database (dinodb) there is a complementary database called portaldb. This database contains a table called cachedqueries, used to cache queries that can take a long time to execute in dinodb, so their results can be retrieved quickly. You can also create your own tables in this database. Basically, what you have to do is:

1. Get the user parameters from the screen form;
2. Check whether the query is already cached in portaldb;
3. If not, execute the query in dinodb and cache the result in portaldb;
4. Present the resulting dataset.
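The numbered steps above amount to the classic cache-aside pattern. A minimal self-contained sketch of that flow, using an in-memory dictionary in place of the portaldb cachedqueries table (the names GetOrExecute, cachedQueries and the sample cache key are illustrative only, not part of the Dino API):

```csharp
using System;
using System.Collections.Generic;

// Stand-in for the portaldb "cachedqueries" table (illustrative only).
var cachedQueries = new Dictionary<string, string>();
int misses = 0;  // counts how often the slow dinodb query actually ran

string GetOrExecute(string cacheName, Func<string> slowQuery)
{
    if (cachedQueries.TryGetValue(cacheName, out var hit))
        return hit;                        // step 2: already cached, skip dinodb
    misses++;
    var result = slowQuery();              // step 3: execute the query in dinodb...
    cachedQueries[cacheName] = result;     // ...and cache it on portaldb
    return result;                         // step 4: present the resultant dataset
}

string first = GetOrExecute("cpulpar-2018-01", () => "42 rows");
string second = GetOrExecute("cpulpar-2018-01", () => "42 rows");
Console.WriteLine($"{first} / {second} / misses={misses}");
// → 42 rows / 42 rows / misses=1
```

The second call returns the cached result without re-running the query, which is exactly the benefit the portal gains for long-running dinodb queries.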

Visual Studio

Some pre-developed portals are distributed with Dino Explorer. Those portals were developed in the ASP.NET environment, using the C# language. We recommend you use them as a starting point to develop your own portals; by using Microsoft Visual Studio you get the same environment in which they were developed. There is a free version called Visual Studio Express (for web), available for download on Microsoft's site. You can use it for a first contact if you aren't a regular user yet.

Components

Basically, you need to be concerned with two DLLs: DxQueries.dll and DxPortal.dll. They can be found in the Dino Explorer installation folder. To reference them, browse for them when adding a reference in Solution Explorer. Here is the way:

In Visual Studio -> Solution Explorer, right-click on the Website and choose Add Reference. Then browse to the Dino Explorer installation folder and select DxQueries.dll and DxPortal.dll. Some DLLs on which DxQueries.dll depends will also be referenced. After you add those references, your BIN folder should contain at least the following:

The additional DLLs are classes that serve queries for each Dino Explorer product, i.e., AIM, CICS, CPU, DASD, Dataset and I/O. DxCore.dll is a core library that provides basic functions.

Getting started

Here we are going to get hands-on. We will show each piece of code you need until we get some results. Presuming you are using Visual Studio, start from:

Create a new web site

Choose an Empty Web Site. Be sure .NET Framework 4.5 is selected.

Add a reference to the Dino API

As shown in the previous section on how to set a reference to the Dino API.

Add a Global.asax file

Add a Global.asax file and insert a Dino.Configuration.Initialize() statement in the Application_Start method. This will initialize the Dino environment (queries, fields, groupings, etc.). The Global.asax file, also known as the ASP.NET application file, is an optional file that contains code for responding to application-level events raised by ASP.NET. The Application_Start() method executes once, when the web site is started.

void Application_Start(object sender, EventArgs e)
{
    Dino.Configuration.Initialize();
}

There are other signatures for this method, where you can pass the product, database key and others. Executing the Dino.Configuration.Initialize() method is mandatory in order to initialize the lists of fields, filter fields, grouping fields, filters, groups and more.

Add a Default.aspx page

Right-click on the web site (Solution Explorer) and select Add -> Web Form. By default the created web form is called Default.aspx. Right-click on this page and select Set As Start Page. At this point, try to execute the project by clicking Run or pressing F5 in Visual Studio. You will receive the message:

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (Provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)

This means there is no connection to the database yet, but the file Dino.Config was created in the website folder. Since Visual Studio creates the Dino.Config file but it may not appear, right-click on the web site and click Refresh Folder. Edit this file and fill in the server name, database name, user and password. It should look like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <connectionStrings>
    <add name="dino" connectionString="Server=localhost;Database=dinodb;User ID=sa;Password=;Integrated Security=true;" providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>

In order to use the portaldb database to cache queries, you have to add a connection line to Dino.Config. It is going to be like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <connectionStrings>
    <add name="dino" connectionString="Server=localhost;Database=dinodb;User ID=sa;Password=;Integrated Security=true;" providerName="System.Data.SqlClient" />
    <add name="portaldb" connectionString="Server=localhost;Database=portaldb;User ID=sa;Password=;Integrated Security=true;" providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>

Further on, we will discuss more about portaldb. Execute the project again; this time we will get a blank page. This means we are on the right track.

DinoQuery class (consuming Dino database information)

The DinoQuery class is the main component the Dino API delivers for consuming Dino database information.

Syntax:

Dino.Queries.DinoQuery query = new Dino.Queries.DinoQuery();

The DinoQuery class exposes the following members.

Constructors:

Name | Description
DinoQuery() | Initializes a new instance of the DinoQuery class.
DinoQuery(QueryName) | Initializes a new instance of the DinoQuery class with the specified QueryName.
DinoQuery(QueryItem) | Initializes a new instance of the DinoQuery class with the specified QueryItem.
DinoQuery(Product, QueryName) | Initializes a new instance of the DinoQuery class with the specified Product and QueryName.
DinoQuery(Product, QueryName, dbkey) | Initializes a new instance of the DinoQuery class with the specified Product, QueryName and dbkey.

Properties:

Name | Description
Fields | Gets a dictionary with the natively configured fields.
Product | Gets or sets the product (a module of the Dino Suite).
UserFilter | Gets or sets a UserFilter, which is a condition to be applied in the WHERE clause of the query.
UserGroup | Gets or sets a UserGroup, where one or more fields will be part of an aggregate feature of the query.
StartTime | Gets or sets a DateTime that restricts the start time of the query.
EndTime | Gets or sets a DateTime that restricts the end time of the query.
Periodicity | Gets or sets the Periodicity, which defines the time slice in which the data will be grouped.
ScopeEvent | Gets or sets a ScopeEvent, which defines the kind of view of the timeline.
FilterApplied | Gets a string with the filter applied to the query.
Sql | Gets a string with the query applied.
dbkey | Gets or sets a key to a connection configuration existing in the Dino.Config file. By default, this value is Dino.

Methods:

Name | Description
LoadFilter(string) | Loads the filter named by the string from the Dino database.
LoadGrouping(string) | Loads a grouping definition from the Dino database.
Execute(string) | Executes the selected query and returns a dataset. The string is a parameter defined in the Dino.Queries.QueryName enumeration in appendix 1.
Execute(string, string) | Executes a query with two parameters.
ValidateDataset(DataSet) | Returns true if the dataset is valid.

Remarks:

Let's put a GridView component on the page to receive the data we are going to get from the Dino database. In the Default.aspx page, drag and drop a GridView component from the Toolbox.

<asp:GridView ID="GridView1" runat="server"></asp:GridView>

Move to Default.aspx.cs (the code-behind part of the page) and put the next few lines of C# code in the Page_Load method:

protected void Page_Load(object sender, EventArgs e)
{
    Dino.Queries.DinoQuery query = new Dino.Queries.DinoQuery(Dino.Product.Cpu,
        Dino.Queries.QueryName.CpuLparSummary);
    DataSet ds = query.Execute("");
    GridView1.DataSource = ds;
    GridView1.DataBind();
}

Execute the project again and voilà: we have a grid populated with data from the Dino database.

When declaring a Dino.DinoQuery, the enum type Product defines which product of the Dino Suite we want to work with in terms of fields, queries, filters, groups, etc. Defining Product as Any, all definitions will be loaded.

Dino.Product enumeration:

Member name | Description
Any | Any product, or not specified
Cpu | CPU Explorer
Io | I/O Explorer
Dasd | DASD Explorer
DataSet | Dataset Explorer
Aim | Application Impact Monitoring
Cics | CICS Explorer
Ims | IMS Explorer
Asc | zcost ASC

See a complete enumeration of Dino.Queries.QueryName in appendix 1.

Additional Features

To obtain more detailed queries, like those seen on the Dino Explorer Suite screens, there are some parameters we can set on the DinoQuery class. Those parameters are:

1. Filter
2. Grouping
3. Periodicity
4. Scope event
5. Start and end times

Filters

Dino Explorer filters can be simple filters, custom filters or multi filters. It is also possible to create dynamic filters.

To get a filter directly from the query, just use this directive:

query.LoadFilter(filterName)

Join many filters in a multi filter, like this:

DinoMultiFilter mf = new DinoMultiFilter();
foreach (string filter in listFilters)
{
    mf.Add(query.LoadFilter(filter));
}
query.Filter = mf;

Create a dynamic filter, as follows:

DinoSimpleFilter filter = new DinoSimpleFilter();
filter.AddValue("exectype", "JES2");
query.Filter = filter;

Grouping

Set a group to group the output results of a query.

DinoSimpleGroup group = new DinoSimpleGroup();
group.Add("storagegroup");
query.Grouping = group;
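Conceptually, a simple filter contributes field/value conditions to the WHERE clause of the generated query (the UserFilter property is documented as a condition applied in the query's WHERE clause). The real SQL generation is internal to DxQueries; the following is a rough, hypothetical illustration of the idea only, with names of my own choosing:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: render field/value pairs as a WHERE clause.
// Real code would parameterize the values instead of inlining them.
string BuildWhere(Dictionary<string, string> conditions) =>
    conditions.Count == 0
        ? ""
        : "WHERE " + string.Join(" AND ",
              conditions.Select(c => $"{c.Key} = '{c.Value}'"));

var filter = new Dictionary<string, string> { ["exectype"] = "JES2" };
Console.WriteLine(BuildWhere(filter));
// → WHERE exectype = 'JES2'
```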

Periodicity

Set the periodicity by which records are grouped and shown. The Periodicity enumeration is:

Periodicity | Description
LoadedData | All records are returned without periodicity grouping
Hourly | Group records by hour
Daily | Group records by day
Weekly | Group records by week
Monthly | Group records by month
Yearly | Group records by year
HourOfDay | Group records by absolute hour. Example: every hour 14 of any day
WeekDay | Group records by week day. Example: every Monday of any week
DayOfMonth | Group records by day of the month. Example: every day 25 of any month
AnyTime | Group records by any interval
Minute | Group records by intervals in minutes

query.Periodicity = Dino.Periodicity.LoadedData;

Scope event

The following table describes the ScopeEvent enumeration:

Member name | Description
On the period | Filters job / job step event records based on both start and end time.
Finished at | Filters job / job step event records based only on end time. It is the default option.
Intervals | Filters intermediary execution data based on SMF interval records.

query.ScopeEvent = Dino.ScopeEvent.Intervals;

Start and end times

You can set the start time, the end time, both of them or none of them. In the example below, start and end times are retrieved from the form.

query.StartTime = DateTime.Parse(txtStartTime.Text);
query.EndTime = DateTime.Parse(txtEndTime.Text);
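Periodicity grouping can be pictured as truncating each record's timestamp to the start of its time slice and aggregating the records that fall into the same slice. A self-contained, illustrative sketch follows; the PeriodStart helper is not part of the Dino API (the real grouping happens in the generated SQL):

```csharp
using System;

// Truncate a timestamp to the start of its periodicity slice (illustrative).
DateTime PeriodStart(DateTime t, string periodicity)
{
    switch (periodicity)
    {
        case "Hourly":  return new DateTime(t.Year, t.Month, t.Day, t.Hour, 0, 0);
        case "Daily":   return t.Date;
        case "Monthly": return new DateTime(t.Year, t.Month, 1);
        case "Yearly":  return new DateTime(t.Year, 1, 1);
        default:        return t;  // LoadedData: no grouping
    }
}

var sample = new DateTime(2018, 1, 15, 14, 37, 12);
Console.WriteLine(PeriodStart(sample, "Monthly").ToString("yyyy-MM-dd HH:mm"));
// → 2018-01-01 00:00
```

Records whose timestamps map to the same PeriodStart value end up in the same consolidated row.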

Execute method

Finally, use the Execute method of the DinoQuery class to get a properly populated dataset. The Execute method accepts one or two parameters; see which parameter must be passed in the enumeration in appendix 1. See below an almost complete method that returns a filled dataset:

private DataSet ExecuteQuery(string filterName, DateTime start, DateTime end)
{
    DinoQuery query = new DinoQuery(Dino.Product.Cpu, QueryName.JobsTotalSummary);
    query.Filter = UserFilter.GetFilter(filterName, Product.Cpu, "");
    DinoSimpleGroup group = new DinoSimpleGroup();
    group.Add("sid");
    query.Grouping = group;
    query.Periodicity = Dino.Periodicity.Monthly;
    query.ScopeEvent = Dino.ScopeEvent.FinishedAt;
    query.StartTime = start;
    query.EndTime = end;
    DataSet ds = query.Execute("");
    return ds;
}

Using DxPortal and portaldb

PortalDB is a database provided for customization purposes. It already contains a table named cachedqueries where, aided by the DxPortal API, we can write datasets and retrieve them more quickly than by querying the Dino database again. The goal is to cache known queries so they can be retrieved more quickly next time. In this database you can create your own tables.

DxPortal has basically two classes:

Format, which has two methods:
  void AddTotalTable(DataSet ds): adds a table with the totals of a given table;
  void FormatGrid(GridView grid, DataSet ds): properly formats the output grid.

CacheDataSet, which has these methods:
  void Add(string name, DataSet ds): adds the given dataset to the cachedqueries table;
  DataSet Get(string name): retrieves the cached dataset;
  DateTime GetDateCached(string name): returns the date and time when the given dataset was cached.

Format class

The Format class offers a set of methods that format grids by setting each field to the format appropriate to its type. There is also a facility to add a totals table to a given dataset.

Methods:

Name | Description
FormatGrid(GridView, DataSet) | Formats a GridView by setting the fields in the DataSet according to the default Dino fields.
FormatGrid(GridView, DataSet, Dictionary<string, FieldItem>) | Formats a GridView by setting the fields in the DataSet according to the given dictionary of fields.
FormatGrid(GridView, DataView, DataRow) | Formats a GridView by setting the fields in the DataView according to the default Dino fields. Also formats a footer line from the given DataRow.
FormatGrid(GridView gv, DataView dv, DataRow tot, Dictionary<string, FieldItem> fields) | Formats a GridView by setting the fields in the DataView according to the given dictionary of fields. Also formats a footer line from the given DataRow.
AddTotalTable(DataSet) | Adds a table to the DataSet with a single row in which the fields are totalized.

Remarks:

Let's say you created an in-memory dataset and want to add a second table with the totals. Use this sequence:

Format.AddTotalTable(ds);

Another facility of the Format class is FormatGrid, which properly formats each field of a given dataset so that the output grid shows the fields correctly.

GridView1.DataSource = ds;
Dino.Portal.Format.FormatGrid(GridView1, ds);
GridView1.DataBind();
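AddTotalTable is documented as adding a table with a single row in which the fields are totalized. A hedged, self-contained sketch of what such a helper might do with standard System.Data types (this is an illustration under that assumption, not the DxPortal implementation):

```csharp
using System;
using System.Data;

// Illustrative totals helper: append a one-row "Totals" table in which
// every numeric column of the dataset's first table is summed.
void AddTotalTable(DataSet ds)
{
    DataTable src = ds.Tables[0];
    DataTable totals = src.Clone();          // same columns, no rows
    totals.TableName = "Totals";
    DataRow row = totals.NewRow();
    foreach (DataColumn col in src.Columns)
        if (col.DataType == typeof(int) || col.DataType == typeof(long) ||
            col.DataType == typeof(double) || col.DataType == typeof(decimal))
            row[col.ColumnName] = src.Compute($"Sum([{col.ColumnName}])", "");
    totals.Rows.Add(row);
    ds.Tables.Add(totals);
}

// Hypothetical sample data shaped like a CPU jobs query result.
var ds = new DataSet();
var jobs = new DataTable("cpujobs");
jobs.Columns.Add("Jobname", typeof(string));
jobs.Columns.Add("CpuTime", typeof(double));
jobs.Rows.Add("JOBA", 1.5);
jobs.Rows.Add("JOBB", 2.5);
ds.Tables.Add(jobs);

AddTotalTable(ds);
Console.WriteLine(ds.Tables["Totals"].Rows[0]["CpuTime"]);
// → 4
```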

CacheDataSet class

CacheDataSet is a useful class we can use to store and retrieve datasets in the portaldb database. It is part of the DxPortal API, i.e., it is necessary to reference DxPortal and add a using Dino.Portal directive at the beginning of the code.

Syntax:

CacheDataSet cache = new CacheDataSet("MyPortal");

The CacheDataSet type exposes the following members.

Constructors:

Name | Description
CacheDataSet(string portalname) | Initializes a new instance of the CacheDataSet class with the specified portal name.

Properties:

Name | Description
dbkey | Gets or sets a key to a connection configuration existing in the Dino.Config file. By default, this value is PortalDB.
portalname | Gets or sets the name of the portal.

Methods:

Name | Description
Add(string, DataSet) | Writes the given DataSet, named by the string.
Get(string) | Retrieves the dataset named by the string.
GetDateCached(string) | Obtains the date and time when that dataset was cached.

Remarks:

Caching a dataset: we recommend you create a name using the unique name of the DinoQuery concatenated with the parameters. Use this syntax:

string cacheName = "cpulpar" + filter + group + query.StartTime.ToShortDateString();
CacheDataSet cache = new CacheDataSet("myportal");
cache.Add(cacheName, ds);

Retrieving a dataset: in the same way, use that unique name to retrieve the dataset:

string cacheName = "cpulpar" + filter + group + query.StartTime.ToShortDateString();
CacheDataSet cache = new CacheDataSet("myportal");
DataSet ds = cache.Get(cacheName);

Retrieving the time cached: additionally, you can obtain the date and time when the dataset was cached:

DateTime cachedTime = cache.GetDateCached(cacheName);

Building a chart

The .NET Framework 4.0 delivers a chart component that allows us to create charts quickly and easily. In the markup part of the page (Default.aspx), drag and drop a Chart component onto the page. It is located under the Data tab of the Visual Studio Toolbox. As the Chart component is dropped, its markup will appear like this:

<asp:Chart ID="Chart1" runat="server">
  <Series>
    <asp:Series Name="Series1"></asp:Series>
  </Series>
  <ChartAreas>
    <asp:ChartArea Name="ChartArea1"></asp:ChartArea>
  </ChartAreas>
</asp:Chart>

This is the simplest definition of the chart. A lot more can be specified by clicking on it and working in the Properties page, such as titles, legends, axes, etc.

By default, the ImageStorageMode property is set to UseHttpHandler. This causes the Chart control to use the ChartHttpHandler registered in the Web.config file to manage the rendered chart images. To manage rendered chart images manually, set the ImageStorageMode property to UseImageLocation, and then set the ImageLocation property to an absolute or relative path, as seen below.

<asp:Chart ID="Chart1" runat="server" ImageStorageMode="UseImageLocation"
    ImageLocation="TempFiles\ChartPic_#SEQ(300,38)">
  <Series>
    <asp:Series Name="Series1"></asp:Series>
  </Series>
  <ChartAreas>
    <asp:ChartArea Name="ChartArea1"></asp:ChartArea>
  </ChartAreas>
</asp:Chart>

zcost Management
Dino Explorer Suite User's Guide
Document Number: DXP-USG-625-01-E
Revision Date: January 15, 2018
zcost Management 2006-2018. All Rights Reserved. All trade names referenced are trademarks.


More information

Vsam File Status Code 93

Vsam File Status Code 93 Vsam File Status Code 93 File Status Keys, Return Codes for Data A quick reference of the VSAM and QSAM File Status or Return Codes for an IBM mainframe or Micro Focus. Records 426-495. CICS/ESA VSAM File

More information

Your Changing z/os Performance Management World: New Workloads, New Skills

Your Changing z/os Performance Management World: New Workloads, New Skills Glenn Anderson, IBM Lab Services and Training Your Changing z/os Performance Management World: New Workloads, New Skills Summer SHARE August 2015 Session 17642 Agenda The new world of RMF monitoring RMF

More information

IBM InfoSphere Guardium S-TAP for Data Sets on z/os User's Guide. Version9Release1

IBM InfoSphere Guardium S-TAP for Data Sets on z/os User's Guide. Version9Release1 IBM InfoSphere Guardium S-TAP for Data Sets on z/os User's Guide Version9Release1 ii IBM InfoSphere Guardium S-TAP for Data Sets on z/os User's Guide Contents Chapter 1. IBM InfoSphere Guardium S-TAP for

More information

CA Allocate DASD Space and Placement CA RS 1610 Service List

CA Allocate DASD Space and Placement CA RS 1610 Service List CA Allocate DASD Space and Placement 12.5 1 CA RS 1610 Service List Description Type 12.5 RO90756 POSSIBLE CATALOG HANG VSAM EXTEND AFTER RO77668 APPLIED ** PRP ** RO91005 V37SMST DUMP >25 VOLUMES >1 DD

More information

IBM. Documentation. IBM Sterling Connect:Direct Process Language. Version 5.3

IBM. Documentation. IBM Sterling Connect:Direct Process Language. Version 5.3 IBM Sterling Connect:Direct Process Language IBM Documentation Version 5.3 IBM Sterling Connect:Direct Process Language IBM Documentation Version 5.3 This edition applies to Version 5 Release 3 of IBM

More information

CA Chorus for Storage Management

CA Chorus for Storage Management CA Chorus for Storage Management User Guide Version 03.0.00, Second Edition This Documentation, which includes embedded help systems and electronically distributed materials, (hereinafter referred to as

More information

IBM System z Fast Track

IBM System z Fast Track IBM System z Fast Track Duración: 1 Días Código del Curso: ESZ0G Método de Impartición: Curso Remoto (Virtual) Temario: This 10 day course is intended to give IT professionals a well rounded introduction

More information

Introduction to Performing a z/os DASD I/O Subsystem Performance Health Check

Introduction to Performing a z/os DASD I/O Subsystem Performance Health Check Introduction to Performing a z/os DASD I/O Subsystem Performance Health Check Instructor: Peter Enrico Email: Peter.Enrico@EPStrategies.com Instructor: Tom Beretvas Email: beretvas@gmail.com z/os Performance

More information

Sub-capacity pricing for select IBM zseries IBM Program License Agreement programs helps improve flexibility and price/performance

Sub-capacity pricing for select IBM zseries IBM Program License Agreement programs helps improve flexibility and price/performance Marketing Announcement August 10, 2004 Sub-capacity pricing for select IBM zseries IBM License Agreement programs helps improve flexibility and price/performance Overview IBM extends sub-capacity charging

More information

IBM InfoSphere Classic Federation for z/os Version 11 Release 1. Installation Guide GC

IBM InfoSphere Classic Federation for z/os Version 11 Release 1. Installation Guide GC IBM InfoSphere Classic Federation for z/os Version 11 Release 1 Installation Guide GC19-4169-00 IBM InfoSphere Classic Federation for z/os Version 11 Release 1 Installation Guide GC19-4169-00 Note Before

More information

APIs Economy for Mainframe Customers: A new approach for modernizing and reusing mainframe assets

APIs Economy for Mainframe Customers: A new approach for modernizing and reusing mainframe assets Contact us: ZIO@hcl.com APIs Economy for Mainframe Customers: A new approach for modernizing and reusing mainframe assets www.zio-community.com Meet Our Experts and Learn the Latest News Copyright 2018

More information

EView/390 Management for HP OpenView Operations Unix

EView/390 Management for HP OpenView Operations Unix EView/390 Management for HP OpenView Operations Unix Concepts Guide Software Version: A.06.00 June 2007 Copyright 2007 EView Technology, Inc. EView Technology makes no warranty of any kind with regard

More information

z/os CSI International 8120 State Route 138 Williamsport, OH

z/os CSI International 8120 State Route 138 Williamsport, OH z/os Software Solutions CSI International 8120 State Route 138 Williamsport, OH 43164-9767 http://www.csi-international.com (800) 795-4914 - USA (740) 420-5400 - Main Operator (740) 333-7335 - Facsimile

More information

IBM. DFSMS Using the Interactive Storage Management Facility. z/os. Version 2 Release 3 SC

IBM. DFSMS Using the Interactive Storage Management Facility. z/os. Version 2 Release 3 SC z/os IBM DFSMS Using the Interactive Storage Management Facility Version 2 Release 3 SC23-656-30 Note Before using this information and the product it supports, read the information in Notices on page

More information

A Field Guide for Test Data Management

A Field Guide for Test Data Management A Field Guide for Test Data Management Kai Stroh, UBS Hainer GmbH Typical scenarios Common situation Often based on Unload/Load Separate tools required for DDL generation Hundreds of jobs Data is taken

More information

Simple And Reliable End-To-End DR Testing With Virtual Tape

Simple And Reliable End-To-End DR Testing With Virtual Tape Simple And Reliable End-To-End DR Testing With Virtual Tape Jim Stout EMC Corporation August 9, 2012 Session Number 11769 Agenda Why Tape For Disaster Recovery The Evolution Of Disaster Recovery Testing

More information

Historical Collection Best Practices. Version 2.0

Historical Collection Best Practices. Version 2.0 Historical Collection Best Practices Version 2.0 Ben Stern, Best Practices and Client Success Architect for Virtualization and Cloud bstern@us.ibm.com Copyright International Business Machines Corporation

More information

Uni Hamburg Mainframe Summit z/os The Mainframe Operating. Part 3 z/os data sets. Introduction to the new mainframe. Chapter 5: Working with data sets

Uni Hamburg Mainframe Summit z/os The Mainframe Operating. Part 3 z/os data sets. Introduction to the new mainframe. Chapter 5: Working with data sets Uni Hamburg Mainframe Summit z/os The Mainframe Operating Chapter 5: Working with data sets Part 3 z/os data sets Michael Großmann IBM Technical Sales Mainframe Systems grossman@de.ibm.com Copyright IBM

More information

Data Express 4.0. Data Subset Extraction

Data Express 4.0. Data Subset Extraction Data Express 4.0 Data Subset Extraction Micro Focus The Lawn 22-30 Old Bath Road Newbury, Berkshire RG14 1QN UK http://www.microfocus.com Copyright Micro Focus 2009-2014. All rights reserved. MICRO FOCUS,

More information

IBM Tivoli OMEGAMON XE on z/os

IBM Tivoli OMEGAMON XE on z/os Manage and monitor your z/os and OS/390 systems IBM Highlights Proactively manage performance and availability of IBM z/os and IBM OS/390 systems from a single, integrated interface Maximize availability

More information

Workload Characterization Algorithms for DASD Storage Subsystems 1

Workload Characterization Algorithms for DASD Storage Subsystems 1 Workload Characterization Algorithms for DASD Storage Subsystems 1 Dr. H. Pat Artis Performance Associates, Inc. 72-687 Spyglass Lane Palm Desert, CA 92260 (760) 346-0310 drpat@perfassoc.com Abstract:

More information

INNOVATION TECHSUPPORT

INNOVATION TECHSUPPORT INNOVATION TECHSUPPORT VOLUME 3.1 Welcome to the third issue of INNOVATION TECH SUPPORT. TECHSUPPORT is intended as INNOVATION s communication vehicle to those responsible for the use of INNOVATION s products.

More information

Chapter 2 TSO COMMANDS. SYS-ED/ Computer Education Techniques, Inc.

Chapter 2 TSO COMMANDS. SYS-ED/ Computer Education Techniques, Inc. Chapter 2 TSO COMMANDS SYS-ED/ Computer Education Techniques, Inc. Objectives You will learn: Executing TSO commands in READY mode or ISPF. The format of a TSO command - syntax and usage. Allocating a

More information

Unum s Mainframe Transformation Program

Unum s Mainframe Transformation Program Unum s Mainframe Transformation Program Ronald Tustin Unum Group rtustin@unum.com Tuesday August 13, 2013 Session Number 14026 Unum Unum is a Fortune 500 company and one of the world s leading employee

More information

IMS K Transactions Per Second (TPS) Benchmark Roadblocks, Limitations, and Solutions

IMS K Transactions Per Second (TPS) Benchmark Roadblocks, Limitations, and Solutions Session 14772 IMS 13 100K Transactions Per Second (TPS) Benchmark Roadblocks, Limitations, and Solutions Presenter: Jack Yuan 1 With THANKS to Dave Viguers, Kevin Hite, and IMS performance team Bruce Naylor

More information

Understanding z/osmf for the Performance Management Sysprog

Understanding z/osmf for the Performance Management Sysprog Glenn Anderson, IBM Lab Services and Training Understanding z/osmf for the Performance Management Sysprog Winter SHARE March 2014 Session 55220 z/osmf: the z/os Management Facility z/osmf is a new product

More information

IBM Tools Base for z/os Version 1 Release 6. IMS Tools Knowledge Base User's Guide and Reference IBM SC

IBM Tools Base for z/os Version 1 Release 6. IMS Tools Knowledge Base User's Guide and Reference IBM SC IBM Tools Base for z/os Version 1 Release 6 IMS Tools Knowledge Base User's Guide and Reference IBM SC19-4372-02 IBM Tools Base for z/os Version 1 Release 6 IMS Tools Knowledge Base User's Guide and Reference

More information

IBM Tivoli Decision Support for z/os Version Administration Guide and Reference IBM SH

IBM Tivoli Decision Support for z/os Version Administration Guide and Reference IBM SH IBM Tivoli Decision Support for z/os Version 1.8.2 Administration Guide and Reference IBM SH19-6816-14 IBM Tivoli Decision Support for z/os Version 1.8.2 Administration Guide and Reference IBM SH19-6816-14

More information

ISPF Users Boot Camp - Part 2 of 2

ISPF Users Boot Camp - Part 2 of 2 Interactive System Productivity Facility (ISPF) ISPF Users Boot Camp - Part 2 of 2 SHARE 116 Session 8677 Peter Van Dyke IBM Australia SHARE 116, Winter 2011 pvandyke@au1.ibm.com Introduction Our jobs

More information

DB2 Performance A Primer. Bill Arledge Principal Consultant CA Technologies Sept 14 th, 2011

DB2 Performance A Primer. Bill Arledge Principal Consultant CA Technologies Sept 14 th, 2011 DB2 Performance A Primer Bill Arledge Principal Consultant CA Technologies Sept 14 th, 2011 Agenda Performance Defined DB2 Instrumentation Sources of performance metrics DB2 Performance Disciplines System

More information

SystemPac Enrichment Form for IBM Rapid Deployment of z/os and DB2 (Version 1.1)

SystemPac Enrichment Form for IBM Rapid Deployment of z/os and DB2 (Version 1.1) IBM Global Technology Services SystemPac Enrichment Form for IBM Rapid Deployment of z/os and DB2 (Version 1.1) Updated: 2/10/2010 Customer Name Internal Use Only Page 1 of 19 Table of Contents Preface

More information

COMPUTER EDUCATION TECHNIQUES, INC. (JCL ) SA:

COMPUTER EDUCATION TECHNIQUES, INC. (JCL ) SA: In order to learn which questions have been answered correctly: 1. Print these pages. 2. Answer the questions. 3. Send this assessment with the answers via: a. FAX to (212) 967-3498. Or b. Mail the answers

More information

IBM Tivoli OMEGAMON XE on z/os Version 5 Release 1. User s Guide SC

IBM Tivoli OMEGAMON XE on z/os Version 5 Release 1. User s Guide SC IBM Tivoli OMEGAMON XE on z/os Version 5 Release 1 User s Guide SC27-4028-00 IBM Tivoli OMEGAMON XE on z/os Version 5 Release 1 User s Guide SC27-4028-00 Note Before using this information and the product

More information

IBM PDTools for z/os. Update. Hans Emrich. Senior Client IT Professional PD Tools + Rational on System z Technical Sales and Solutions IBM Systems

IBM PDTools for z/os. Update. Hans Emrich. Senior Client IT Professional PD Tools + Rational on System z Technical Sales and Solutions IBM Systems IBM System z AD Tage 2017 IBM PDTools for z/os Update Hans Emrich Senior Client IT Professional PD Tools + Rational on System z Technical Sales and Solutions IBM Systems hans.emrich@de.ibm.com 2017 IBM

More information

IBM. Container Pricing for IBM Z. z/os. Version 2 Release 3

IBM. Container Pricing for IBM Z. z/os. Version 2 Release 3 z/os IBM Container Pricing for IBM Z Version 2 Release 3 Note Before using this information and the product it supports, read the information in Notices on page 129. This edition applies to Version 2 Release

More information

EMC for Mainframe Tape on Disk Solutions

EMC for Mainframe Tape on Disk Solutions EMC for Mainframe Tape on Disk Solutions May 2012 zmainframe Never trust a computer you can lift! 1 EMC & Bus-Tech for Mainframe EMC supports mainframe systems since 1990 with first integrated cached disk

More information

Paradigm Shifts in How Tape is Viewed and Being Used on the Mainframe

Paradigm Shifts in How Tape is Viewed and Being Used on the Mainframe Paradigm Shifts in How Tape is Viewed and Being Used on the Mainframe Ralph Armstrong EMC Corporation February 5, 2013 Session 13152 2 Conventional Outlook Mainframe Tape Use Cases BACKUP SPACE MGMT DATA

More information

Basi di Dati Complementi. Mainframe

Basi di Dati Complementi. Mainframe Basi di Dati Complementi 3.1. DBMS commerciali DB2-3.1.2 Db2 in ambiente mainframe Andrea Maurino 2007 2008 Mainframe 1 Mainframe Terminologia Mainframe Storage Management Subsystem (SMS) Is an automated

More information

IBM. PDF file of IBM Knowledge Center topics. IBM Operations Analytics for z Systems. Version 2 Release 2

IBM. PDF file of IBM Knowledge Center topics. IBM Operations Analytics for z Systems. Version 2 Release 2 IBM Operations Analytics for z Systems IBM PDF file of IBM Knowledge Center topics Version 2 Release 2 IBM Operations Analytics for z Systems IBM PDF file of IBM Knowledge Center topics Version 2 Release

More information

WLM Quickstart Policy Update

WLM Quickstart Policy Update WLM Quickstart Policy Update Cheryl Watson Session 2541; SHARE 101 in Washington, D.C. August 12, 2003 Watson & Walker, Inc. publishers of Cheryl Watson s TUNING Letter & BoxScore WLM Quickstart Policy

More information

Introduction to Coupling Facility Requests and Structure (for Performance)

Introduction to Coupling Facility Requests and Structure (for Performance) Introduction to Coupling Facility Requests and Structure (for Performance) Instructor: Peter Enrico Email: Peter.Enrico@EPStrategies.com z/os Performance Education, Software, and Managed Service Providers

More information

VSAM Overview. Michael E. Friske Fidelity Investments. Session 11681

VSAM Overview. Michael E. Friske Fidelity Investments. Session 11681 VSAM Overview Michael E. Friske Fidelity Investments Session 11681 This Is a VSAM Overview Session This session is intended for those who know very little or nothing about VSAM. I will provide some basic

More information

Challenges of Capacity Management in Large Mixed Organizations

Challenges of Capacity Management in Large Mixed Organizations Challenges of Capacity Management in Large Mixed Organizations Glenn Schneck Sr. Enterprise Solutions Engineer ASG Software Solutions March 12, 2014 Session Number 15385 Topics Capacity planning challenges

More information

Non IMS Performance PARMS

Non IMS Performance PARMS Non IMS Performance PARMS Dave Viguers dviguers@us.ibm.com Edited By: Riaz Ahmad IBM Washington Systems Center Copyright IBM Corporation 2008 r SMFPRMxx Check DDCONS Yes (default) causes SMF to consolidate

More information

DB2 Data Sharing Then and Now

DB2 Data Sharing Then and Now DB2 Data Sharing Then and Now Robert Catterall Consulting DB2 Specialist IBM US East September 2010 Agenda A quick overview of DB2 data sharing Motivation for deployment then and now DB2 data sharing /

More information

Implementing Data Masking and Data Subset with Sequential or VSAM Sources

Implementing Data Masking and Data Subset with Sequential or VSAM Sources Implementing Data Masking and Data Subset with Sequential or VSAM Sources 2013 Informatica Corporation. No part of this document may be reproduced or transmitted in any form, by any means (electronic,

More information

IBM. MVS Planning: Workload Management. z/os. Version 2 Release 3 SC

IBM. MVS Planning: Workload Management. z/os. Version 2 Release 3 SC z/os IBM MVS Planning: Workload Management Version 2 Release 3 SC34-2662-30 Note Before using this information and the product it supports, read the information in Notices on page 259. This edition applies

More information

www.linkedin.com/in/jimliebert Jim.Liebert@compuware.com Table of Contents Introduction... 1 Why the Compuware Workbench was built... 1 What the Compuware Workbench does... 2 z/os File Access and Manipulation...

More information

Updates that apply to IBM DB2 Analytics Accelerator Loader for z/os V2R1 User's Guide (SC )

Updates that apply to IBM DB2 Analytics Accelerator Loader for z/os V2R1 User's Guide (SC ) Updates that apply to IBM DB2 Analytics Accelerator Loader for z/os V2R1 User's Guide (SC27-6777-00) Date of change: January 2018 Topic: Multiple Change description: Documentation changes made in support

More information

Introduction to Statistical SMF data

Introduction to Statistical SMF data Introduction to Statistical SMF data Lyn Elkins IBM ATS elkinsc@us.ibm.com Agenda What is SMF? What is MQ SMF? Overview of MQ statistical SMF Controlling the generation of the data Processing the data

More information

Version 10 Release 1.3. IBM Security Guardium S-TAP for IMS on z/os User's Guide IBM SC

Version 10 Release 1.3. IBM Security Guardium S-TAP for IMS on z/os User's Guide IBM SC Version 10 Release 1.3 IBM Security Guardium S-TAP for IMS on z/os User's Guide IBM SC27-8022-03 Version 10 Release 1.3 IBM Security Guardium S-TAP for IMS on z/os User's Guide IBM SC27-8022-03 Note:

More information

Mainframe Developer NO.2/29, South Dhandapani St, Burkit road, T.nagar, Chennai-17. Telephone: Website:

Mainframe Developer NO.2/29, South Dhandapani St, Burkit road, T.nagar, Chennai-17. Telephone: Website: Mainframe Developer Mainframe Developer Training Syllabus: IBM Mainframe Concepts Architecture Input/output Devices JCL Course Syllabus INTRODUCTION TO JCL JOB STATEMENT CLASS PRTY MSGCLASS MSGLEVEL TYPRUN

More information

Micro Focus The Lawn Old Bath Road Newbury, Berkshire RG14 1QN UK

Micro Focus The Lawn Old Bath Road Newbury, Berkshire RG14 1QN UK Data Express 4.0 Micro Focus The Lawn 22-30 Old Bath Road Newbury, Berkshire RG14 1QN UK http://www.microfocus.com Copyright Micro Focus 2009-2013. All rights reserved. MICRO FOCUS, the Micro Focus logo

More information

Using electronic mail to automate DB2 z/os database copy requests. CMG - 28 e 29 maggio Milano, Roma

Using electronic mail to automate DB2 z/os database copy requests. CMG - 28 e 29 maggio Milano, Roma Using electronic mail to automate DB2 z/os database copy requests CMG - 28 e 29 maggio 2014 - Milano, Roma Agenda 1. UnipolSai Environment 2. UnipolSai needs and problems 3. The initial solution - where

More information

IBM Application Performance Analyzer for z/os Version IBM Corporation

IBM Application Performance Analyzer for z/os Version IBM Corporation IBM Application Performance Analyzer for z/os Version 11 IBM Application Performance Analyzer for z/os Agenda Introduction to Application Performance Analyzer for z/os A tour of Application Performance

More information

Airline Control System V2.3 delivers a new base for exploiting 64-bit addressing

Airline Control System V2.3 delivers a new base for exploiting 64-bit addressing Software Announcement November 11, 2003 Airline Control System V2.3 delivers a new base for exploiting 64-bit addressing Overview Airline Control System (ALCS) is a control monitor designed to run in an

More information

Splunking Your z/os Mainframe Introducing Syncsort Ironstream

Splunking Your z/os Mainframe Introducing Syncsort Ironstream Copyright 2016 Splunk Inc. Splunking Your z/os Mainframe Introducing Syncsort Ironstream Ed Hallock Director of Product Management, Syncsort Inc. Disclaimer During the course of this presentation, we may

More information

Infosys. Working on Application Slowness in Mainframe Infrastructure- Best Practices-Venkatesh Rajagopalan

Infosys. Working on Application Slowness in Mainframe Infrastructure- Best Practices-Venkatesh Rajagopalan Infosys Working on Application Slowness in Mainframe Infrastructure- Best Practices-Venkatesh Rajagopalan Summary Abstract An energy utility client was facing real-time infrastructure issues owing to the

More information

System z13: First Experiences and Capacity Planning Considerations

System z13: First Experiences and Capacity Planning Considerations System z13: First Experiences and Capacity Planning Considerations Robert Vaupel IBM R&D, Germany Many Thanks to: Martin Recktenwald, Matthias Bangert and Alain Maneville for information to this presentation

More information

DB2 is a complex system, with a major impact upon your processing environment. There are substantial performance and instrumentation changes in

DB2 is a complex system, with a major impact upon your processing environment. There are substantial performance and instrumentation changes in DB2 is a complex system, with a major impact upon your processing environment. There are substantial performance and instrumentation changes in versions 8 and 9. that must be used to measure, evaluate,

More information

DB2 and Memory Exploitation. Fabio Massimo Ottaviani - EPV Technologies. It s important to be aware that DB2 memory exploitation can provide:

DB2 and Memory Exploitation. Fabio Massimo Ottaviani - EPV Technologies. It s important to be aware that DB2 memory exploitation can provide: White Paper DB2 and Memory Exploitation Fabio Massimo Ottaviani - EPV Technologies 1 Introduction For many years, z/os and DB2 system programmers have been fighting for memory: the former to defend the

More information

Speaker: Thomas Reed /IBM Corporation SHARE Seattle 2015 Session: 16956

Speaker: Thomas Reed /IBM Corporation SHARE Seattle 2015 Session: 16956 PDSE Nuts and Bolts Speaker: Thomas Reed /IBM Corporation SHARE Seattle 2015 Session: 16956 Insert Custom Session QR if Desired. Permission is granted to SHARE Inc. to publish this presentation paper in

More information

New monitoring method for enterprise critical applications

New monitoring method for enterprise critical applications New monitoring method for enterprise critical applications Dr Tomasz Cieplak SystemWork GmbH 07/11/2017 OC Agenda 1. Application monitoring 2. Facts about SMF records 3. Software for processing SMF records

More information

TUC TOTAL UTILITY CONTROL FOR DB2 Z/OS. TUC Unique Features

TUC TOTAL UTILITY CONTROL FOR DB2 Z/OS. TUC Unique Features TUC Unique Features 1 Overview This document is describing the unique features of TUC that make this product outstanding in automating the DB2 object maintenance tasks. The document is comparing the various

More information

CA Subsystem Analyzer for DB2 for z/os

CA Subsystem Analyzer for DB2 for z/os CA Subsystem Analyzer for DB2 for z/os User Guide Version 17.0.00, Fourth Edition This Documentation, which includes embedded help systems and electronically distributed materials, (hereinafter referred

More information

IBM. Container Pricing for IBM Z. z/os. Version 2 Release 3

IBM. Container Pricing for IBM Z. z/os. Version 2 Release 3 z/os IBM Container Pricing for IBM Z Version 2 Release 3 Note Before using this information and the product it supports, read the information in Notices on page 129. This edition applies to Version 2 Release

More information

NatQuery The Data Extraction Solution For ADABAS

NatQuery The Data Extraction Solution For ADABAS NatQuery The Data Extraction Solution For ADABAS Overview...2 General Features...2 Integration to Natural / ADABAS...5 NatQuery Modes...6 Administrator Mode...6 FTP Information...6 Environment Configuration

More information

SMF 101 Everything You Should Know and More

SMF 101 Everything You Should Know and More SMF 101 Everything You Should Know and More Cheryl Watson Watson & Walker, Inc. www.watsonwalker.com - home of Cheryl Watson s Tuning Letter, CPU Charts, BoxScore and GoalTender August 9, 2012 Session

More information

WebSphere MQ V Intro to SMF115 Lab

WebSphere MQ V Intro to SMF115 Lab WebSphere MQ V Intro to SMF115 Lab Intro to SMF115 Lab Page: 1 Lab Objectives... 3 General Lab Information and Guidelines... 3 Part I SMF 115 WMQ Statistical Data... 5 Part II Evaluating SMF115 data over

More information

Getting Started with Xpediter/Eclipse

Getting Started with Xpediter/Eclipse Getting Started with Xpediter/Eclipse This guide provides instructions for how to use Xpediter/Eclipse to debug mainframe applications within an Eclipsebased workbench (for example, Topaz Workbench, Eclipse,

More information