IBM Virtualization Engine TS7700 Series. VEHSTATS Decoder Version 1.9


IBM Virtualization Engine TS7700 Series
VEHSTATS Decoder Version 1.9

Jim Fisher
Advanced Technical Skills - Americas

1 Introduction

This document provides a cross-reference between the various VEHSTATS output files and the IBM Virtualization Engine TS7700 Series Statistical Data Format White Paper. It contains a set of tables that correspond to the various VEHSTATS reports. The abbreviated column and row headings generated by VEHSTATS are listed with the corresponding Record Name and Name from the white paper. A description field contains the field name for the statistical records, along with any additional pertinent information. Refer to the appropriate field in the statistical data format white paper for a detailed description of each row or column.

The following VEHSTATS generated reports are cross-referenced:

H20VIRT
H21ADP00
H21ADPXX
H21ADPSU
H30TVC1
H32TDU12
H32CSP
H32GUP01
H33GRID
HOURFLOW
AVGRDST
DAYSMRY
DAYSMRY - Alphabetical order
MONSMRY
MONSMRY - Alphabetical order
COMPARE
HOURFLAT/DAYHSMRY/WEKHSMRY - Alphabetical order

This document should be used in conjunction with the IBM Virtualization Engine TS7700 Series Statistical Data Format White Paper, which can be found on Techdocs.

Change History:

V1.0 Original Version

V1.1 12/06/2010
o Updated H32GUP01 to reflect new format

V1.2 12/15/2010
o Updated H32GUP01 to reflect the newest format

V1.3 1/30/2012
o Added a note that the columns in DAYHSMRY and WEKHSMRY are described by the HOURFLAT section.
o Updated fields to use MiB and GiB instead of MB and GB.

V1.4 3/4/2013
o Added decoder for the HOURFLOW report
o Added R3.0 related fields to the H30TVC1 report
o Refreshed the HOURFLAT chapter to bring it up to date
o Other minor updates

V1.5 3/12/2013
o Added cache throughput fields and the UTC_OFFSET field to the HOURFLAT alphabetical section
o Added rows for HOURFLOW that were omitted in V1.4

V1.6 4/16/2013
o Changed Active GiB EOI to Active GB EOI in DAYSMRY and MONSMRY

V1.7
o Spelled MONSUMRY and DAYSUMRY correctly as MONSMRY and DAYSMRY

V1.8
o Updates:
  H20VIRT - Added throughput delay columns, which are available starting in R3.0
  H21ADPSU - Added device read and write rate as computed by VEHSTATS
  H30TVC1 - Changed GiB RES CACHE to GB RES CACHE so it matches the units used to display the disk cache size
  H31IMEX - Added this report
  H32CSP - Updated example to show JC and JK media types
  H32GUP01 - Changed ACTIVE GiB to ACTIVE GB so it matches the units used to display the disk cache size
  H33GRID - Added Immediate, Deferred, and Synchronous copy columns
  DAYSMRY - Changes made to both Reporting Order and Alphabetical Order:
    Changed Active GiB EOI to Active GB EOI
    Changed GiB to MiB as appropriate
    Added four fields to the PERFORMANCE BY PG section: All MiB to Mig EOI, All MiB to Mig MAX, All MiB to Cpy EOI, and All MiB to Cpy MAX
    Added Import/Export fields
    Added copy performance fields
    GRID COPY RECEIVER SNAPSHOT: Changed VV to copy EOI to VV to Recv EOI and MiB to copy EOI to MiB to Recv EOI. This removes ambiguity as to the direction of the copy.
    USAGE BY POOL: Changed GiB to GB for POOL xx ACT GB EOI, POOL xx GB WRT SUM, and POOL xx GB RD SUM
  MONSMRY - Changes made to both Reporting Order and Alphabetical Order:

    Changed Days w/activity to Host Use Days
    Changed Active GiB to Active GB
    Added Max MiB to MIG and Max MiB to CPY to the PERFORMANCE by PG section
    Added fields
    USAGE BY POOL: Changed GiB to GB for POOL xx ACT GB, POOL xx GB WRT, and POOL xx GB RD
  HOURFLAT:
    Changed PGx_GiB_in_TVC to PGx_GB_in_TVC
    Changed POOL_xx_ACT_GiB to POOL_xx_ACT_GB
    Adjusted the description of Avg_Clus_Util and Max_Clus_Util to indicate this field only includes CPU with R3.0+
    Added the following fields:
    o UTC_OFFSET
    o Avg_Disk_Util
    o Max_Disk_Util
    o Thr_Dly_Av_Sec
    o Thr_Dly_Mx_Sec
    o Thr_Dly_Percent

V1.9 January 2014
o Added avg and max ahead and behind counts from the Virtual Device record - H20VIRT
o Added total used cache and total used flash cache from the Hnode HSM record - H30TVC1
o Added removed time delayed copies average age and time delayed copies removal count from the Hnode HSM record - H30TVC1
o Added time delayed copy queue from the Hnode Grid record - H33GRID

Contents:

1 Introduction
2 H20VIRT
3 H21ADP0x
4 H21ADPXX
5 H21ADPSU
6 H21ADPSU (Throughput Distribution)
7 H30TVC1
8 H31IMEX
9 H32TDU12
10 H32CSP
11 H32GUP01
12 H33GRID
13 HOURFLOW
14 AVGRDST
15 DAYSMRY Report Order
16 DAYSMRY - Alphabetical Order
17 MONSMRY Report Order
18 MONSMRY - Alphabetical Order
19 COMPARE
20 HOURFLAT - Alphabetical
Disclaimers

2 H20VIRT

[Sample report: (C) IBM REPORT=H20VIRT (14009) VNODE VIRTUAL DEVICE HISTORICAL RECORDS, GRID#=BA008 DIST_LIB_ID=0 VNODE_ID=0 NODE_SERIAL=CL02736P UTCMINUS=07. The fixed-width column layout does not survive transcription; the column headings are decoded in the table below.]

H20VIRT VNODE VIRTUAL DEVICE HISTORICAL RECORDS

Body Related Fields:

-VIRTUAL_DRIVES- INST: Installed Virtual Devices
-VIRTUAL_DRIVES- --MOUNTED-- MIN AVG MAX: Minimum/Average/Maximum Virtual Devices Mounted
MAX THRPUT (R2.2): Configured Maximum Throughput
THROUGHPUT DELAY_SECS MAX AVG PCT (R3.0+): Maximum Delay, Average Delay, Delay Interval Percentage
AHEAD MAX, AHEAD AVG, BEHIND MAX, BEHIND AVG (R3.1+): Maximum ahead count, Average ahead count, Maximum behind count, Average behind count
CHANNEL_BLOCKS_WRITTEN_FOR_THESE_BLOCKSIZES <=2048 <=4096 <=8192 <=16384 <=32768 <=65536 >65536: Channel Blocks Written, one column per blocksize byte range

3 H21ADP0x

[Sample report: (C) IBM REPORT=H21ADP00 (13247) VNODE ADAPTOR HISTORICAL ACTIVITY, GRID#=99110 DIST_LIB_ID=0 VNODE_ID=0 NODE_SERIAL=CL0H5233 UTCMINUS=06. ADAPTOR 0 FICON-2 (ONLINE) L DRAWER SLOT# 6, with PORT 0 and PORT 1 column groups. MiB is 1024 based, MB is 1000 based.]

There are 2 or 4 of these reports, one for each FICON adapter: H21ADP00, H21ADP01, H21ADP02, and H21ADP03.

H21ADP0x VNODE ADAPTOR HISTORICAL ACTIVITY

Header Related Fields (Record Name: Vnode Adapter):
ADAPTOR x: Based on which set of data in the container
FICON-x: Adapter Type
(ONLINE): Adapter State
x DRAWER: HBA Drawer
SLOT# x: HBA Slot
PORT x (Vnode Adapter-Port): Based on which set of data in the container

Body Related Fields (Record Name: Vnode Adapter, Name: Vnode Adapter-Port):
GBS RTE: Maximum Data Rate
MiB sec: Actual Data Rate
CHANNEL RDMiB /sec: Bytes Read by the Channel; MiB/s computed by VEHSTATS
CHANNEL WRMiB /sec: Bytes Written by the Channel; MiB/s computed by VEHSTATS
DEVICE RDMiB COMP: Bytes Read by Virtual Devices; compression ratio computed by VEHSTATS
DEVICE WRMiB COMP: Bytes Written to Virtual Devices; compression ratio computed by VEHSTATS

4 H21ADPXX

[Sample report: (C) IBM REPORT=H21ADPXX (13247) VNODE ADAPTOR HISTORICAL ACTVTY COMBINED, GRID#=99110 NODE_SERIAL=CL0H5233 UTCMINUS=06. Columns: RECORD TIME, TOTAL MiB/s, then for each of ADAPTOR 0 through ADAPTOR 3 FICON a CHANNEL RDGiB/WRGiB pair and a DEVICE RDGiB/WRGiB pair.]

The values in this report are summed by VEHSTATS using the data from each of the individual adapters: H21ADP00, H21ADP01, H21ADP02, and H21ADP03.

H21ADPXX VNODE ADAPTOR HISTORICAL ACTIVITY COMBINED

Header Related Fields (Record Name: Vnode Adapter):
ADAPTOR x: Based on which set of data in the container
FICON-x: Adapter Type

Body Related Fields:
TOTAL MiB/s (Vnode Adapter): Actual Data Rate
---CHANNEL--- RDGiB WRGiB (Vnode Adapter-Port): Bytes Read by the Channel, Bytes Written by the Channel
---DEVICE---- RDGiB WRGiB (Vnode Adapter-Port): Bytes Read by Virtual Devices, Bytes Written to Virtual Devices

5 H21ADPSU

[Sample report: (C) IBM REPORT=H21ADPSU (11206) VNODE ADAPTOR HISTORICAL ACTVTY COMBINED, GRID#=55555 NODE_SERIAL=CL0H3128 UTCMINUS=05. MiB is 1024 based, MB is 1000 based.]

Some of the values in this report are computed by VEHSTATS using the data from each of the individual adapters: H21ADP00, H21ADP01, H21ADP02, and H21ADP03.

H21ADPSU VNODE ADAPTOR HISTORICAL ACTIVITY COMBINED

Body Related Fields:
Chan Total MiB/s (Vnode Adapter): Actual Data Rate
Device Total MiB/s (Vnode Adapter-Port): Bytes Read by Virtual Devices, Bytes Written to Virtual Devices
WRTHR %RLTV IMPAC (Hnode HSM, HSM-Cache): Computed by VEHSTATS using Percent Host Write Throttle and Average Host Write Throttle. Equation is shown at the bottom of the table.
CPTHR %RLTV IMPAC (Hnode HSM, HSM-Cache): Computed by VEHSTATS using Percent Copy Throttle and Average Copy Throttle. Equation is shown at the bottom of the table.
DCTHR SEC /IO (Hnode HSM, HSM-Cache): Average Deferred Copy Throttle
CHANNEL RDGiB MiB/s (Vnode Adapter-Port): Bytes Read by the Channel; MiB/s computed by VEHSTATS
CHANNEL WRGiB MiB/s (Vnode Adapter-Port): Bytes Written by the Channel; MiB/s computed by VEHSTATS
DEVICE RDGiB MiB/s COMP (Vnode Adapter-Port): Bytes Read by Virtual Devices; MiB/s and compression ratio computed by VEHSTATS
DEVICE WRGiB MiB/s COMP (Vnode Adapter-Port): Bytes Written to Virtual Devices; MiB/s and compression ratio computed by VEHSTATS

                                  (# 30 sec samples with throttling) * (avg throttle value) * (100 to express as %)
%Relative Impact (%RLTV IMPAC) = ----------------------------------------------------------------------------------
                                  (# 30 sec samples in interval) * (2 sec max value)
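As a worked illustration of this equation, the following Python sketch (not part of VEHSTATS; the function and argument names are illustrative) computes the relative impact percentage from the 30-second throttle samples of one reporting interval:

```python
def relative_impact_pct(samples_with_throttle, avg_throttle_sec,
                        samples_in_interval=30, max_throttle_sec=2.0):
    """%RLTV IMPAC: fraction of the interval's maximum possible
    throttling that was actually applied, expressed as a percent."""
    return (samples_with_throttle * avg_throttle_sec * 100.0) / \
           (samples_in_interval * max_throttle_sec)

# A 15-minute interval holds 30 samples of 30 seconds each.
# 12 throttled samples averaging 0.5 s of delay per I/O:
print(relative_impact_pct(12, 0.5))  # -> 10.0 (% relative impact)
```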

6 H21ADPSU (Throughput Distribution)

[Sample report: (C) IBM REPORT=H21ADPSU (13221) VNODE ADAPTOR THROUGHPUT DISTRIBUTION, GRID#=99110 NODE_SERIAL=CL0H5233 UTCMINUS=06. Columns: MB/SEC_RANGE, #INTERVALS, PCT, ACCUM%.]

This report shows the distribution of the host data rate (uncompressed).

H21ADPSU VNODE ADAPTOR THROUGHPUT DISTRIBUTION

Body Related Fields:
MB/SEC_RANGE (Vnode Adapter): Actual Data Rate
#INTERVALS (N/A): Number of intervals in sample period
PCT (N/A): Percentage of total intervals in the range
ACCUM% (N/A): Cumulative percentage of total intervals in the range
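The PCT and ACCUM% columns follow from simple bucketing of per-interval rates. A minimal sketch of that tabulation (the bucket boundaries here are illustrative, not necessarily the ones VEHSTATS uses):

```python
from collections import Counter

def throughput_distribution(rates_mb_s, bucket_size=50):
    """Bucket per-interval host data rates and print the count,
    percent, and cumulative percent for each MB/s range."""
    counts = Counter((r // bucket_size) * bucket_size for r in rates_mb_s)
    total, accum = len(rates_mb_s), 0.0
    for low in sorted(counts):
        pct = 100.0 * counts[low] / total
        accum += pct
        print(f"{low:4d}-{low + bucket_size - 1:4d} "
              f"{counts[low]:6d} {pct:6.1f} {accum:6.1f}")

# One rate sample per 15-minute interval:
throughput_distribution([12, 48, 75, 120, 130, 260])
```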

7 H30TVC1

[Sample report: (C) IBM REPORT=H30TVC1 (13221) HNODE HSM HISTORICAL CACHE PARTITION, GRID#=99110 NODE_SERIAL=CL0H5233 HNODE=ACTIVE UTCMINUS=06, PARTITION SIZE= GB(=1000MiB) TVC_SIZE= GB(1000MiB), with mount, WRITE_THROTTLING, and COPY_THROTTLING column groups.]

The title of the report indicates this is for cache partition 1. Although the architecture is designed for 8 cache partitions, there is only one partition in today's code. This report is decoded in two sections due to its large number of columns.

H30TVC1 HNODE HISTORICAL CACHE PARTITION - Part 1

Header Related Fields:
PARTITION SIZE=xxxxxxx (Hnode HSM, HSM-Cache-Partition): Partition Size
TVC_SIZE=xxxxxxx (Hnode HSM, HSM-Cache): TVC Size

Body Related Fields:
AVG MAX CLUS_UTIL or CPU_UTIL (Hnode HSM, HSM-Cache): For R2.0 through pre-R3.0 PGA1 code levels, the AVG CLUS_UTIL field contains the Average Cluster Utilization percentage and the Maximum field is zero. This is the greater of CPU Utilization and Disk Cache Throughput Utilization. For R3.0 PGA1 or higher, these fields contain the Average and Maximum CPU Usage percentage.
AVG MAX DISK_UTIL (Hnode HSM, HSM-Cache): Average Disk Usage Percentage, Maximum Disk Usage Percentage. These values are first reported in R3.0 PGA1.
PART HIT% (Hnode HSM, HSM-Cache-Partition): Computed by VEHSTATS by adding the number of fast ready and cache hit mounts and dividing the sum by the total number of mounts, including cache miss mounts.
TOTAL_ NUM MNTS (Hnode HSM, HSM-Cache-Partition): Computed by VEHSTATS using Fast Ready Mounts, Cache Hit Mounts, and Cache Miss Mounts.

TOTAL_ AVG SECS (Hnode HSM, HSM-Cache-Partition): Computed by VEHSTATS using Fast Ready Mounts, Average Fast Ready Mount Time, Cache Hit Mounts, Average Cache Hit Mount Time, Cache Miss Mounts, and Average Cache Miss Mount Time.
FAST_RDY NUM MNTS, AVG SECS (Hnode HSM, HSM-Cache-Partition): Fast Ready Mounts, Average Fast Ready Mount Time
CACHE_HIT NUM MNTS, AVG SECS (Hnode HSM, HSM-Cache-Partition): Cache Hit Mounts, Average Cache Hit Mount Time
CACHE_MIS NUM MNTS, AVG SECS (Hnode HSM, HSM-Cache-Partition): Cache Miss Mounts, Average Cache Miss Mount Time
SYNC_MODE NUM MNTS, AVG SECS (Hnode HSM, HSM-Cache-Partition): Sync Level Mounts, Sync Level Mount Time. These values are first reported with R2.1.
P-MIG THROT VALUE (Hnode HSM, HSM-Cache): Pre-migration Throttle Threshold
WRITE_THROTTLING NUM 15MIN INTVL, NUM 30SEC SMPLS, NUM SEC /IO (Hnode HSM, HSM-Cache): Number of 15-minute intervals being reported (not a field in the statistics record), computed from Percent Host Write Throttle and sample period length, and Average Host Write Throttle.
WRITE_THROTTLING %RLTV IMPAC VALUE (Hnode HSM, HSM-Cache): Computed by VEHSTATS using Percent Host Write Throttle and Average Host Write Throttle. Equation is shown at the bottom of the table.
WRITE_THROTTLING REASN (Hnode HSM, HSM-Cache): Host Write Throttle Reason(s). This value is first reported with R3.0.
COPY_THROTTLING NUM 15MIN INTVL, NUM 30SEC SMPLS, NUM SEC /IO (Hnode HSM, HSM-Cache): Number of 15-minute intervals being reported (not a field in the statistics record), computed from Percent Copy Throttle and sample period length, and Average Copy Throttle.

COPY_THROTTLING %RLTV IMPAC VALUE (Hnode HSM, HSM-Cache): Computed by VEHSTATS using Percent Copy Throttle and Average Copy Throttle. Equation is shown at the bottom of the table.
COPY_THROTTLING REASN (Hnode HSM, HSM-Cache): Copy Throttle Reason(s). This value is first reported with R3.0.

                                  (# 30 sec samples with throttling) * (avg throttle value) * (100 to express as %)
%Relative Impact (%RLTV IMPAC) = ----------------------------------------------------------------------------------
                                  (# 30 sec samples in interval) * (2 sec max value)
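PART HIT% is the one derived column in this part of the report; a minimal sketch of the computation described above (function and argument names are illustrative):

```python
def cache_hit_pct(fast_ready_mounts, cache_hit_mounts, cache_miss_mounts):
    """PART HIT%: fast-ready plus cache-hit mounts as a percentage
    of all mounts, including cache misses."""
    total = fast_ready_mounts + cache_hit_mounts + cache_miss_mounts
    if total == 0:
        return 0.0  # no mounts in the interval
    return 100.0 * (fast_ready_mounts + cache_hit_mounts) / total

print(cache_hit_pct(fast_ready_mounts=40, cache_hit_mounts=55,
                    cache_miss_mounts=5))  # -> 95.0
```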

H30TVC1 Continued.

[Sample report columns: DEFER_COPY_THROTTLING; PREFERENCE_GROUP_x with VIRT VOLS CACHE, GB RES CACHE, GiBTO PRE MIG, GiBTO COPY OUT, ROLLING_AV_MAX -TIME_IN_CACHE- and -VIRT_VOLS_MIG- (4HR 48HR 35DA, on the hour), TIME_DELAY_COPY LVOLS_REMOVED AV_AGE COUNT (every 4 hours); TOTAL TVC_GB USED and TOTAL TVC_GB FLASH (R3.1); DR VOLSER.]

H30TVC1 HNODE HISTORICAL CACHE PARTITION - Part 2

----DEFER_COPY_THROTTLING---- NUM 15MIN INTVL, NUM 30SEC SMPLS, AVG SEC /INTVL, BASE SECS (Hnode HSM, HSM-Cache): Number of 15-minute intervals being reported (not a field in the statistics record), computed from Percent Deferred Copy Throttle and sample period length, Average Deferred Copy Throttle, and Base Deferred Copy Throttle.
----DEFER_COPY_THROTTLING---- REASN (Hnode HSM, HSM-Cache): Deferred Copy Throttle Reason(s). This value is first reported with R3.0.

Header Related Fields:
PREFERENCE_GROUP_x (Hnode HSM, HSM Cache Partition): Indicates which preference group, 0 or 1, the columns belong to.

Body Related Fields:
VIRT VOLS CACHE (Hnode HSM, HSM Cache Partition): Virtual Volumes in Cache
GB RES CACHE (Hnode HSM, HSM Cache Partition): Data Resident in Cache, divided by 1000 to convert MB to GB
GiBTO PRE MIG (Hnode HSM, HSM Cache Partition, R1.5): Unmigrated Data, divided by 1000 to convert MiB to GiB
GiBTO COPY OUT (Hnode HSM, HSM Cache Partition, R1.5): Awaiting Replication to available Clusters, divided by 1000 to convert MiB to GiB

ROLLING_AV_MAX -TIME_IN_CACHE- 4HR 48HR 35DA -ON_THE_HOUR- (Hnode HSM, HSM Cache Partition): 4 Hour Average Cache Age, 48 Hour Average Cache Age, 35 Day Average Cache Age
-VIRT_VOLS_MIG- 4HR 48HR 35DA --ON_THE_HOUR-- (Hnode HSM, HSM Cache Partition): Volumes Migrated Last 4 Hours, Volumes Migrated Last 48 Hours, Volumes Migrated Last 35 Days
TIME_DELAY_COPY LVOLS_REMOVED AV_AGE COUNT -EVERY_4_HOURS- (Hnode HSM, HSM Cache Partition): Removed time delayed copies average age, Time delayed copies removal count
TOTAL TVC_GB USED, TOTAL TVC_GB FLASH (Hnode HSM, HSM Cache, R3.1): Total used cache, Total used flash cache
DR VOLSER (Hnode HSM, HSM Disaster Recovery): Disaster Recovery Volser

8 H31IMEX

[Sample report: (C) IBM REPORT=H31IMEX (13221) HNODE EXPORT/IMPORT HISTORICAL ACTIVITY, GRID#=99110 DIST_LIB_ID=4 NODE_SERIAL=CL4H5149 HNODE=ACTIVE UTCMINUS=06. Columns: PHYS VOLS IMPORT, PHYS VOLS EXPORT, VIRT VOLS IMPORT, VIRT VOLS EXPORT, MB_DATA IMPORTED, MB_DATA EXPORTED.]

H31IMEX HNODE EXPORT/IMPORT HISTORICAL ACTIVITY

Body Related Fields:
PHYS VOLS IMPORT: Physical Volumes Imported
PHYS VOLS EXPORT: Physical Volumes Exported
VIRT VOLS IMPORT: Logical Volumes Imported
VIRT VOLS EXPORT: Logical Volumes Exported
MB_DATA IMPORTED: Amount of data imported
MB_DATA EXPORTED: Amount of data exported

9 H32TDU12

[Sample report: (C) IBM REPORT=H32TDU12 (13247) HNODE LIBRARY HISTORICAL DRIVE ACTIVITY, GRID#=99110 DIST_LIB_ID=2 NODE_SERIAL=CL2H5249 UTCMINUS=06. Column groups PHYSICAL_DRIVES_3592-E05 and PHYSICAL_DRIVES_NONE, each with INST, AVL, --MOUNTED-- MIN AVG MAX, -MOUNT_SECS- MIN AVG MAX, and ----MOUNTS_FOR----- STG MIG RCM SDE TOT.]

H32TDU12 HNODE LIBRARY HISTORICAL DRIVE ACTIVITY

Header Related Fields:
PHYSICAL_DRIVES_3592-E05 (Hnode Library, Tape Device): Device Class ID
PHYSICAL_DRIVES_NONE: Indicates there isn't a second device type. Currently the TS7700 only supports one device type at a time.

Body Related Fields (Record Name: Hnode Library, Name: Tape Device):
INST: Installed Physical Devices
AVL: Available Physical Devices
--MOUNTED-- MIN AVG MAX: Minimum Physical Devices Mounted, Average Physical Devices Mounted, Maximum Physical Devices Mounted
-MOUNT_SECS- MIN AVG MAX: Minimum Physical Mount Time, Average Physical Mount Time, Maximum Physical Mount Time
----MOUNTS_FOR----- STG MIG RCM SDE TOT: Physical Recall Mounts, Physical Pre-Migrate Mounts, Physical Reclaim Mounts, Physical Security Data Erase Mounts. TOT is total physical mounts and is computed by VEHSTATS from the four other physical mount fields.

10 H32CSP

[Sample report: (C) IBM REPORT=H32CSP (11206) HNODE LIBRARY HIST SCRTCH POOL ACTIVITY, GRID#=55555 NODE_SERIAL=CL0H3128 UTCMINUS=05. Columns under SCRATCH_STACKED_VOLUMES_AVAILABLE_BY_TYPE: 3592JA, 3592JJ, 3592JB, 3592JC, 3592JK.]

H32CSP HNODE LIBRARY HISTORICAL SCRATCH POOL ACTIVITY

Header Related Fields:
SCRATCH_STACKED_VOLUMES_AVAILABLE_BY_TYPE: This is just a header.

Body Related Fields:
3592xx (Hnode Library, Library - Pooling, Common Scratch Pool (CSP)): Physical Count. The type (xx) is from the Physical Type field.

11 H32GUP01

[Sample report: (C) IBM REPORT=H32GUP01 (11206) HNODE LIBRARY HIST GUP/POOLING ACTIVITY, GRID#=55555 NODE_SERIAL=CL0H3128 UTCMINUS=05. POOL 01 3592-E05 JA(640) with columns ACTIVE LVOLS, ACTIVE GB, MiB WRITTN, MiB READ, RECLAIM PCT POL, and per-media SCR, PRIV, SDE, ONLY (read only), and AVAIL counts, on the hour.]

Report H32GUP01 is for pool 01 and 02 volumes, H32GUP03 is for pool 03 and 04 volumes, and so forth.

H32GUP0x HNODE LIBRARY HISTORICAL GUP/POOLING ACTIVITY

Header Related Fields:
POOL xx yyyy-zzz (Hnode Library, Library - Pooling, General Use Pool (GUP)): There are 32 sets of data, one for each of the 32 general use pools. The pool number is listed (xx). The device type is listed based on the Device Class field.

Body Related Fields:
ACTIVE LVOLS, ACTIVE GB -ON_THE_HOUR- (Hnode Library, Library - Pooling, General Use Pool (GUP)): Active Logical Volumes, Active Data
MiB WRITTN (Hnode Library, Library - Pooling, General Use Pool (GUP)): Data Written to Pool
MiB READ (Hnode Library, Library - Pooling, General Use Pool (GUP)): Data Read from Pool
RECLAIM- PCT POOL (Hnode Library, Pooling, GUP - Reclaim): Reclaim Threshold. Pool number based on which GUP is being reported.
SCR 92JA PRIV SDE ONLY AVAIL -ON_THE_HOUR- (Hnode Library, Library - Pooling, GUP): Each pool provides data for up to 2 media types: Scratch Volume Count, Private Volume Count by media type, Waiting for Security Data Erase, Read Only Recovery Volume Count, Unavailable Volume Count.

12 H33GRID

[Sample report: (C) IBM REPORT=H33GRID (13221) HNODE HISTORICAL PEER-TO-PEER ACTIVITY, GRID#=99110 NODE_SERIAL=CL0H5233 UTCMINUS=06. MiB is 1024 based, MB is 1000 based. Columns: LVOLS/MiB TO RECEIVE, AV_DEF/AV_RUN QUEAGE minutes, #_LVOLS TIM_DLY CPY_QUE, LVOLS/MB TO_TVC_BY RUN_COPY, DEF_COPY, and SYNC_COPY, MiB_TO TVC_BY COPY, CALC MiB/SEC, V_MNTS DONE_BY OTHR_DL, MiB_XFR FR_DL RMT_WR, MiB_XFR TO_DL RMT_RD, MiB_FR 0-->1 TVC_BY COPY, CALC MiB/SEC.]

H33GRID HNODE HISTORICAL PEER-TO-PEER ACTIVITY

Body Related Fields:
LVOLS TO RECEIVE (Hnode Grid, Grid): Logical Volumes for Copy
MiB TO RECEIVE (Hnode Grid, Grid): Data to Copy
AV_DEF QUEAGE, AV_RUN QUEAGE ---MINUTES--- (Hnode Grid, Grid): Average Deferred Queue Age, Average Immediate Queue Age
#_LVOLS TIM_DLY CPY_QUE (Hnode Grid, Grid): Time delayed copy queue
LVOLS MB TO_TVC_BY RUN_COPY (Hnode Grid, Grid-Cluster): Number of immediate copies that have completed; Data Transferred into a cluster's Cache from other clusters as part of an Immediate copy operation
LVOLS MB TO_TVC_BY DEF_COPY (Hnode Grid, Grid-Cluster): Number of deferred copies that have completed; Data Transferred into a cluster's Cache from other clusters as part of a deferred copy operation
LVOLS MB TO_TVC_BY SYNC_COPY (Hnode Grid, Grid-Cluster): Number of sync mode copies that have completed; Data Transferred into a cluster's Cache from other clusters as part of a sync mode copy operation
MiB_TO TVC_BY COPY (Hnode Grid, Grid-Cluster): Data Transferred into a Cluster's Cache from other Clusters as part of a Copy Operation
CALC MiB/ SEC (Hnode Grid, Grid-Cluster): Computed by VEHSTATS using the above field and dividing by the number of seconds in the interval

V_MNTS DONE_BY OTHR_DL (Hnode Grid, Grid-Cluster): Logical Mounts Directed to other Clusters
MiB_XFR FR_DL RMT_WR (Hnode Grid, Grid-Cluster): Data Transferred into a Cluster's Cache from other Clusters as part of a Remote Write Operation
MiB_XFR TO_DL RMT_RD (Hnode Grid, Grid-Cluster): Data Transferred from a Cluster's Cache to Other Clusters as part of a Remote Read operation
MiB_FR x-->y TVC_BY COPY (Hnode Grid, Grid-Cluster): Data Transferred from a Cluster's Cache to Other Clusters as part of a Copy Operation. The x is the source cluster number and the y is the target cluster.
CALC MiB/ SEC (Hnode Grid, Grid-Cluster): Computed by VEHSTATS using the above field and dividing by the number of seconds in the interval
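The CALC MiB/SEC columns in this and later reports are all derived the same way: the per-interval MiB total divided by the interval length in seconds. A minimal sketch (names are illustrative; historical records cover 15-minute intervals, i.e. 900 seconds):

```python
def interval_rate_mib_s(mib_in_interval, interval_seconds=900):
    """CALC MiB/SEC: convert a per-interval MiB total into an
    average transfer rate over that interval."""
    return mib_in_interval / interval_seconds

# 27000 MiB copied into this cluster's cache in one 15-minute interval:
print(interval_rate_mib_s(27000))  # -> 30.0 MiB/s
```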

13 HOURFLOW

[Sample report: (C) IBM REPORT=HOURFLOW (13059) DATA FLOW IN MiB/sec BY CLUSTER, GRID#=0002C DIST_LIB_ID=02 NODE_SERIAL=CL2CL2H5. Columns: Avg/Max Clus Util, Avg/Max Disk Util, MiB/s Total Xfer, To_TVC Dev_Wr, Fr_TVC Dev_Rd, To_TVC Recv, Fr_TVC Sent, To_TVC Recall, Fr_TVC PreMig, Queue GiB_to PreMig/Copy/Recv, Write Throt Impac%, Copy Throt Impac%, Avg msec DCThrt, To_TVC RMT_WR, Fr_TVC RMT_RD, Intvl Sec.]

HOURFLOW DATA FLOW IN MiB/sec BY CLUSTER

Body Related Fields:
Avg Clus or CPU Util (Hnode HSM, HSM-Cache): For R2.0 through pre-R3.0 PGA1 code levels, this field contains the Average Cluster Utilization percentage. This is the greater of CPU Utilization and Disk Cache Throughput Utilization. For R3.0 PGA1 or higher, this field contains the Average CPU Usage percentage.
Max Clus or CPU Util (Hnode HSM, HSM-Cache): For pre-R3.0 PGA1 code levels this field is zero. For R3.0 PGA1 or higher, this field contains the Maximum CPU Usage Percentage.
Avg Disk Util (Hnode HSM, HSM-Cache): Average Disk Usage Percentage. Reported with R3.0 PGA1 code or higher.
Max Disk Util (Hnode HSM, HSM-Cache): Maximum Disk Usage Percentage. Reported with R3.0 PGA1 code or higher.

MiB/s Total Xfer (Vnode Adapter / Hnode Grid / Hnode Library): The rate of compressed data written and read to/from the disk cache. The following are added together by VEHSTATS to generate this field: Bytes Read by Virtual Devices, Bytes Written to Virtual Devices, Data Transferred into a Cluster's Cache from other Clusters as part of a Copy Operation, Data Transferred from a Cluster's Cache to Other Clusters as part of a Copy Operation, Data Read from Pool, Data Written to Pool, Data Transferred into a Cluster's Cache from other Clusters as part of a Remote Write Operation, and Data Transferred from a Cluster's Cache to Other Clusters as part of a Remote Read operation. (See the sketch at the end of this section.)
MiB/s To_TVC Dev_Wr (Vnode Adapter, Vnode Adapter-Port): The rate of compressed writes to the disk cache from the Host Bus Adapters (HBA). Bytes Written to Virtual Devices.
MiB/s Fr_TVC Dev_Rd (Vnode Adapter, Vnode Adapter-Port): The rate of compressed reads from the disk cache to the host bus adapters. Bytes Read by Virtual Devices.
MiB/s To_TVC Recv (Hnode Grid, Grid-Cluster): Rate of compressed copies received from the grid into this cluster's disk cache. Data Transferred into a Cluster's Cache from other Clusters as part of a Copy Operation. Computed by VEHSTATS using the above field and dividing by the number of seconds in the interval.
MiB/s Fr_TVC Sent (Hnode Grid, Grid-Cluster): Rate of compressed copies sent from this cluster's disk cache to the grid. Data Transferred from a Cluster's Cache to Other Clusters as part of a Copy Operation. Computed by VEHSTATS using the above field and dividing by the number of seconds in the interval.

MiB/s To_TVC Recall (Hnode Library, Library - Pooling, General Use Pool (GUP)): Rate of compressed data written to the disk cache from physical tape for recall. Data Read from Pool. Computed by VEHSTATS using the above field and dividing by the number of seconds in the interval.
MiB/s Fr_TVC PreMig (Hnode Library, Library - Pooling, General Use Pool (GUP)): Rate of compressed data written to physical tape from the disk cache for pre-migrations. Data Written to Pool. Computed by VEHSTATS using the above field and dividing by the number of seconds in the interval.
Queue GiB_to Copy (Hnode HSM, HSM Cache Partition): Depth of the outgoing copy queue (compressed data). Awaiting Replication to available Clusters. Divided by 1000 to convert MiB to GiB.
Queue GiB_to Recv (Hnode Grid, Grid): Depth of the incoming copy queue. Data to Copy. Divided by 1000 to convert MiB to GiB.
Write Throt Impac% (Hnode HSM, HSM-Cache): The Host Write Throttle Impact Percentage. Computed by VEHSTATS using Percent Host Write Throttle and Average Host Write Throttle. Equation is shown at the bottom of the table.
Copy Throt Impac% (Hnode HSM, HSM-Cache): The outgoing copy throttle impact percentage. Computed by VEHSTATS using Percent Copy Throttle and Average Copy Throttle. Equation is shown at the bottom of the table.
Avg msec DCThrt (Hnode HSM, HSM-Cache): The amount of Deferred Copy Throttle (DCT) applied. Average Deferred Copy Throttle.
MiB/s To_TVC RMT_WR (Hnode Grid, Grid-Cluster): Data Transferred (compressed) into a Cluster's Cache from other Clusters as part of a Remote Write Operation. Computed by VEHSTATS using the above field and dividing by the number of seconds in the interval.

MiB/s Fr_TVC RMT_RD (Hnode Grid, Grid-Cluster): Data Transferred from a Cluster's Cache to Other Clusters as part of a Remote Read operation. Computed by VEHSTATS using the above field and dividing by the number of seconds in the interval.
Intvl Sec: The number of seconds in the reporting interval.

                                  (# 30 sec samples with throttling) * (avg throttle value) * (100 to express as %)
%Relative Impact (%RLTV IMPAC) = ----------------------------------------------------------------------------------
                                  (# 30 sec samples in interval) * (2 sec max value)
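As referenced above, MiB/s Total Xfer sums the eight compressed data flows through the disk cache and divides by the interval length. A minimal sketch of that composition (argument names are illustrative, mapped to the fields listed for this report):

```python
def total_xfer_mib_s(dev_rd, dev_wr, copy_in, copy_out,
                     recall, premig, rmt_wr, rmt_rd,
                     interval_seconds=900):
    """MiB/s Total Xfer: all compressed MiB moved to/from the disk
    cache in one interval, expressed as an average rate."""
    total_mib = (dev_rd + dev_wr + copy_in + copy_out +
                 recall + premig + rmt_wr + rmt_rd)
    return total_mib / interval_seconds

# Device I/O, grid copies, and pre-migration over one 15-minute interval:
print(total_xfer_mib_s(dev_rd=9000, dev_wr=18000, copy_in=4500,
                       copy_out=4500, recall=0, premig=9000,
                       rmt_wr=0, rmt_rd=0))  # -> 50.0 MiB/s
```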

14 AVGRDST

[Sample report: (C) IBM REPORT=AVGRDST (10025) QTR INTERVAL AVERAGE RECALL MOUNT PENDING DISTRIBUTION. The interval buckets range from <30 seconds to >15 minutes, with columns HOW MANY, INTVL ACCUM, INTVL ACCUM%, READ MISS, ACCUM MISS, and MISS ACCUM%.]

AVGRDST - Average Recall Mount Pending Distribution

Body Related Fields:
AVG MPEND INTERVAL (Hnode HSM, HSM-Cache-Partition): The CACHE_MIS AVG SECS value in H30TVC1 is used for the tabulation. The interval buckets range from <30 seconds to >15 minutes.
HOW MANY (Hnode HSM, HSM-Cache-Partition): The CACHE_MIS NUM MNTS value in H30TVC1 is used for the tabulation. This column shows the number of cache miss mounts that fall into the interval.
INTVL ACCUM: This is the accumulated number of intervals. VEHSTATS computes this value.
INTVL ACCUM%: This is the accumulated percent of the total number of recall mounts. VEHSTATS computes this value.
READ MISS (Hnode Library, Tape Device): Number of read misses during the interval
ACCUM MISS: Accumulated number of read misses
MISS ACCUM%: Accumulated percentage of all read misses

15 DAYSMRY Report Order

[Sample report: (C) IBM REPORT=DAYSMRY (13247) DAILY SUMMARY, GRID#=99110 NODE_SERIAL=CL0H5233 UTCMINUS=06. One column per day (Sunday through Saturday) plus Week_ended, with Type, Date EOI, and Code Level EOI header rows.]

DAYSMRY DAILY SUMMARY

Header Related Fields:
Type: Indicates if the column is a daily summary (Sunday through Saturday) or a weekly summary (Week_ended).
Date EOI: This is the date of the day being reported or the last reporting day of the week that is being summed.
Code Level EOI: This is the TS7700 code level at the end of the day or the end of the last reporting day of the week being summed.

Body Related Fields:

TS7700 CAPACITY
TVC Size EOI (Hnode HSM, HSM Cache): TVC Size
Active LVols EOI (Hnode Library, Library - Pooling, General Use Pool (GUP)): Active Logical Volumes. Computed by VEHSTATS by summing data from all 32 General Use Pools.
Active GB EOI (Hnode Library, Library - Pooling, General Use Pool (GUP)): Active Data, converted to GB by VEHSTATS. Computed by VEHSTATS by summing data from all 32 General Use Pools.

VIRTUAL MOUNTS
Total Mnts SUM (Hnode HSM, HSM Cache Partition): Computed by VEHSTATS from the three fields below.
Scratch Mnts SUM (Hnode HSM, HSM Cache Partition): Fast Ready Mounts
Read Hit Mnts SUM (Hnode HSM, HSM Cache Partition): Cache Hit Mounts
Read Miss Mnts SUM (Hnode HSM, HSM Cache Partition): Cache Miss Mounts
Mount Hit % CALC (Hnode HSM, HSM Cache Partition): Computed by VEHSTATS using (Fast Ready Mounts + Cache Hit Mounts) / (Fast Ready Mounts + Cache Hit Mounts + Cache Miss Mounts)
Avg Mnt Sec WAVG (Hnode HSM, HSM Cache Partition): Computed by VEHSTATS from the three fields below.
Avg Scr Mt Sec WAVG (Hnode HSM, HSM Cache Partition): Average Fast Ready Mount Time
Avg Rd Hit Sec WAVG (Hnode HSM, HSM Cache Partition): Average Cache Hit Mount Time
Avg Rd Mis Sec WAVG (Hnode HSM, HSM Cache Partition): Average Cache Miss Mount Time
Max Virt Drvs MAX: Maximum Virtual Devices Mounted

Avg Virt Drvs AVG: Average Virtual Devices Mounted

PHYSICAL MOUNTS
Phy Dev3592E (Hnode Library, Library Tape Device): Device Class ID
Phy Stg Mnts SUM (Hnode Library, Library Tape Device): Physical Recall Mounts
Phy Mig Mnts SUM (Hnode Library, Library Tape Device): Physical Pre-Migrate Mounts
Phy Rcm Mnts SUM (Hnode Library, Library Tape Device): Physical Reclaim Mounts
Tot Phy Mnts SUM (Hnode Library, Library Tape Device): Computed by VEHSTATS by summing the above 3 fields.
Max Phy Mtime MAX (Hnode Library, Library Tape Device): Maximum Physical Mount Time
Avg Phy Mtime AVG>0 (Hnode Library, Library Tape Device): Average Physical Mount Time. VEHSTATS does not count the intervals without any mounted devices when computing the average.
Max Phy Mntd MAX (Hnode Library, Library Tape Device): Maximum Physical Devices Mounted
Avg Phy Mntd AVG>0 (Hnode Library, Library Tape Device): Average Physical Devices Mounted

HOST DATA TRANSFER (UNCOMPRESSED)
GiB Read SUM (Vnode Adapter, Vnode Adapter-Port): Bytes Read by the Channel, converted to GiB by VEHSTATS
GiB Write SUM (Vnode Adapter, Vnode Adapter-Port): Bytes Written by the Channel, converted to GiB by VEHSTATS
Total GiB Xfer SUM (Vnode Adapter, Vnode Adapter-Port): Bytes Read by the Channel + Bytes Written by the Channel. Computed by VEHSTATS by summing the two fields, converted to GiB by VEHSTATS.
Max QtrRd MB/s MAX (Vnode Adapter, Vnode Adapter-Port): Bytes Read by the Channel. Computed by VEHSTATS from the 15-minute (quarter hour) intervals, converted to MB/s by VEHSTATS.
Max QtrWr MB/s MAX (Vnode Adapter, Vnode Adapter-Port): Bytes Written by the Channel. Computed by VEHSTATS from the 15-minute (quarter hour) intervals, converted to MB/s by VEHSTATS.
Max Qtr MB/s MAX (Vnode Adapter, Vnode Adapter-Port): Bytes Read by the Channel + Bytes Written by the Channel. Computed by VEHSTATS from the 15-minute (quarter hour) intervals, converted to MB/s by VEHSTATS.
Average MB/s AVG (Vnode Adapter, Vnode Adapter-Port): Bytes Read by the Channel + Bytes Written by the Channel, converted to MB/s by VEHSTATS.

WrtThrotImpac% AVG (Hnode HSM, HSM Cache): Computed by VEHSTATS using Percent Host Write Throttle and Average Host Write Throttle. Equation is shown at the bottom of the table.
CpyThrotImpac% AVG (Hnode HSM, HSM Cache): Computed by VEHSTATS using Percent Copy Throttle and Average Copy Throttle. Equation is shown at the bottom of the table.

DATA COMPRESSION
Read Comp AVG (Vnode Adapter, Vnode Adapter-Port): Average read compression ratio. Computed by VEHSTATS using Bytes Read from Virtual Devices and Bytes Read by the Channel.
Write Comp AVG (Vnode Adapter, Vnode Adapter-Port): Average write compression ratio. Computed by VEHSTATS using Bytes Written to Virtual Devices and Bytes Written by the Channel.
Total Comp AVG (Vnode Adapter, Vnode Adapter-Port): Average read/write compression ratio. Computed by VEHSTATS using Bytes Read from Virtual Devices, Bytes Written to Virtual Devices, Bytes Read by the Channel, and Bytes Written by the Channel.

PERFORMANCE BY PG
PG0 VV in TVC EOI (Hnode HSM, HSM Cache Partition): Virtual Volumes in Cache
PG1 VV in TVC EOI (Hnode HSM, HSM Cache Partition): Virtual Volumes in Cache
PG0 GiB in TVC EOI (Hnode HSM, HSM Cache Partition): Data Resident in Cache, converted to GiB by VEHSTATS
PG1 GiB in TVC EOI (Hnode HSM, HSM Cache Partition): Data Resident in Cache, converted to GiB by VEHSTATS
PG0 MiB to MIG EOI (Hnode HSM, HSM Cache Partition): Unmigrated Data
PG1 MiB to MIG EOI (Hnode HSM, HSM Cache Partition): Unmigrated Data
PG0 MiB to CPY EOI (Hnode HSM, HSM Cache Partition): Awaiting Replication to available Clusters
PG1 MiB to CPY EOI (Hnode HSM, HSM Cache Partition): Awaiting Replication to available Clusters
All MiB to Mig EOI (Hnode HSM, HSM Cache Partition): Unmigrated Data. Total computed by VEHSTATS.
All MiB to Mig MAX (Hnode HSM, HSM Cache Partition): Unmigrated Data. Total computed by VEHSTATS.
All MiB to Cpy EOI (Hnode HSM, HSM Cache Partition): Awaiting Replication to available Clusters. Total computed by VEHSTATS.

All MiB to Cpy MAX (Hnode HSM, HSM Cache Partition): Awaiting Replication to available Clusters. Total computed by VEHSTATS.

VOLUMES PURGED BY PG
PG0 4HR VV MIG EOI (Hnode HSM, HSM Cache Partition): Volumes Migrated Last 4 Hours
PG1 4HR VV MIG EOI (Hnode HSM, HSM Cache Partition): Volumes Migrated Last 4 Hours
PG0 48H VV MIG EOI (Hnode HSM, HSM Cache Partition): Volumes Migrated Last 48 Hours
PG1 48H VV MIG EOI (Hnode HSM, HSM Cache Partition): Volumes Migrated Last 48 Hours
PG0 35D VV MIG EOI (Hnode HSM, HSM Cache Partition): Volumes Migrated Last 35 Days
PG1 35D VV MIG EOI (Hnode HSM, HSM Cache Partition): Volumes Migrated Last 35 Days

RESIDENCY TIME BY PG
PG0 4HR AV MIN EOI (Hnode HSM, HSM Cache Partition): 4 Hour Average Cache Age
PG1 4HR AV MIN EOI (Hnode HSM, HSM Cache Partition): 4 Hour Average Cache Age
PG0 48H AV MIN EOI (Hnode HSM, HSM Cache Partition): 48 Hour Average Cache Age
PG1 48H AV MIN EOI (Hnode HSM, HSM Cache Partition): 48 Hour Average Cache Age
PG0 35D AV MIN EOI (Hnode HSM, HSM Cache Partition): 35 Day Average Cache Age
PG1 35D AV MIN EOI (Hnode HSM, HSM Cache Partition): 35 Day Average Cache Age

BLOCKS TRANSFERRED
BlkSz LE 2K SUM: Channel Blocks Written, <=2048 byte range
BlkSz LE 4K SUM: Channel Blocks Written, <=4096 byte range
BlkSz LE 8K SUM: Channel Blocks Written, <=8192 byte range
BlkSz LE 16K SUM: Channel Blocks Written, <=16384 byte range
BlkSz LE 32K SUM: Channel Blocks Written, <=32768 byte range
BlkSz LE 64K SUM: Channel Blocks Written, <=65536 byte range
BlkSz GT 64K SUM: Channel Blocks Written, above 65536 byte range

EXPORT/IMPORT ACTIVITY
Phy Vols Imp SUM: Physical Volumes Imported
Phy Vols Exp SUM: Physical Volumes Exported
Virt Vols Imp SUM: Logical Volumes Imported

Virt Vols Exp SUM: Logical Volumes Exported
MiB Data Imp SUM: Amount of data imported
MiB Data Exp SUM: Amount of data exported

GRID COPY RECEIVER SNAPSHOT
Av DEF Min EOI (Hnode Grid, Grid): Average Deferred Queue Age. Value at the end of the reporting interval.
Av RUN Min EOI (Hnode Grid, Grid): Average Immediate Queue Age. Value at the end of the reporting interval.
VV to Recv EOI (Hnode Grid, Grid): Logical Volumes for Copy. Value at the end of the reporting interval.
MiB to Recv EOI (Hnode Grid, Grid): Data to Copy. Value at the end of the reporting interval.
Max Av DEF Min MAX (Hnode Grid, Grid): Average Deferred Queue Age. Maximum from the reporting interval.
Max Av RUN Min MAX (Hnode Grid, Grid): Average Immediate Queue Age. Maximum from the reporting interval.
Max VV to copy MAX (Hnode Grid, Grid): Logical Volumes for Copy. Maximum for the reporting interval.
Max MiB to copy MAX (Hnode Grid, Grid): Data to Copy. Maximum from the reporting interval.

GRID COPY PERFORMANCE CLUSTER x COPIES
MiB x y Copy SUM (Hnode Grid, Grid-Cluster): Data Transferred From a Cluster's Cache To Other Clusters as part of a Copy Operation
Avg x y MB/s AVG (Hnode Grid, Grid-Cluster): Data Transferred From a Cluster's Cache To Other Clusters as part of a Copy Operation. Computed by VEHSTATS.
Max x y MB/s MAX (Hnode Grid, Grid-Cluster): Data Transferred From a Cluster's Cache To Other Clusters as part of a Copy Operation. Computed by VEHSTATS.
MiB S x Recv SUM (Hnode Grid, Grid-Cluster): Data Transferred into a Cluster's Cache from other Clusters as part of a Copy Operation

COMMON SCRATCH POOL MEDIA
CSPMedx 3592xx EOI (Hnode Library, Library - Pooling, Common Scratch Pool (CSP)): Physical Count. One entry for each type of media in the pool. The x and xx values will reflect the media type.

OVERALL CARTRIDGE MEDIA
PriMedx 3592xx EOI (Hnode Library, Library - Pooling, GUP): Private Volume Count. Computed by VEHSTATS by summing all of the General Use Pool data.
ScrMedx 3592xx EOI (Hnode Library, Library - Pooling, GUP): Scratch Volume Count. Computed by VEHSTATS by summing all of the General Use Pool data.

USAGE BY POOL
POOL xx (Hnode Library): A set for each of the 32 general use pools is available.

POOL xx 3592Jx (Hnode Library, Library - Pooling, GUP): Physical Identifiers
POOL xx ACT VV EOI (Hnode Library, Library - Pooling, General Use Pool (GUP)): Active Logical Volumes
POOL xx ACT GB EOI (Hnode Library, Library - Pooling, General Use Pool (GUP)): Active Data, converted to GB by VEHSTATS
POOL xx # PRIV EOI (Hnode Library, Library - Pooling, GUP): Private Volume Count
POOL xx # SRCH EOI (Hnode Library, Library - Pooling, GUP): Scratch Volume Count
POOL xx GB WRT SUM (Hnode Library, Library - Pooling, GUP): Data Written to Pool, converted to GB by VEHSTATS
POOL xx GB RD SUM (Hnode Library, Library - Pooling, GUP): Data Read from Pool, converted to GB by VEHSTATS

                                  (# 30 sec samples with throttling) * (avg throttle value) * (100 to express as %)
%Relative Impact (%RLTV IMPAC) = ----------------------------------------------------------------------------------
                                  (# 30 sec samples in interval) * (2 sec max value)
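The DATA COMPRESSION ratios above compare uncompressed channel bytes with compressed virtual-device bytes. A minimal sketch of that arithmetic (names are illustrative):

```python
def compression_ratio(channel_bytes, device_bytes):
    """Compression ratio: uncompressed bytes moved by the channel
    divided by compressed bytes stored on virtual devices."""
    return channel_bytes / device_bytes if device_bytes else 0.0

# Write Comp: 300 GiB written by the channel, 120 GiB landing on
# virtual devices after compression -> 2.5:1
gib = 1024 ** 3
print(compression_ratio(300 * gib, 120 * gib))  # -> 2.5
```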

16 DAYSMRY - Alphabetical Order

DAYSMRY DAILY SUMMARY - Alphabetical Order

Header Related Fields:
Code Level EOI: This is the TS7700 code level at the end of the day or the end of the last reporting day of the week being summed.
Date EOI: This is the date of the day being reported or the last reporting day of the week that is being summed.
Type: Indicates if the column is a daily summary (Sunday through Saturday) or a weekly summary (Week_ended).

Body Related Fields:
Active GBs EOI (Hnode Library, Library - Pooling, General Use Pool (GUP)): Active Data, converted to GB by VEHSTATS. Computed by VEHSTATS by summing data from all 32 General Use Pools.
Active LVols EOI (Hnode Library, Library - Pooling, General Use Pool (GUP)): Active Logical Volumes. Computed by VEHSTATS by summing data from all 32 General Use Pools.
All MiB to Cpy EOI (Hnode HSM, HSM Cache Partition): Awaiting Replication to available Clusters. Total computed by VEHSTATS.
All MiB to Cpy MAX (Hnode HSM, HSM Cache Partition): Awaiting Replication to available Clusters. Total computed by VEHSTATS.
All MiB to Mig EOI (Hnode HSM, HSM Cache Partition): Unmigrated Data. Total computed by VEHSTATS.
All MiB to Mig MAX (Hnode HSM, HSM Cache Partition): Unmigrated Data. Total computed by VEHSTATS.
Av DEF Min EOI (Hnode Grid, Grid): Average Deferred Queue Age. Value at the end of the reporting interval.
Av RUN Min EOI (Hnode Grid, Grid): Average Immediate Queue Age. Value at the end of the reporting interval.
Average MB/s AVG (Vnode Adapter, Vnode Adapter-Port): Bytes Read by the Channel + Bytes Written by the Channel, converted to MB/s by VEHSTATS.
Avg Mnt Sec WAVG (Hnode HSM, HSM Cache Partition): Computed by VEHSTATS from the Avg Scr Mt Sec, Avg Rd Hit Sec, and Avg Rd Mis Sec fields.
Avg Phy Mntd AVG>0 (Hnode Library, Library Tape Device): Average Physical Devices Mounted
Avg Phy Mtime AVG>0 (Hnode Library, Library Tape Device): Average Physical Mount Time. VEHSTATS does not count the intervals without any mounted devices when computing the average.

Avg Rd Hit Sec WAVG (Hnode HSM, HSM Cache Partition): Average Cache Hit Mount Time
Avg Rd Mis Sec WAVG (Hnode HSM, HSM Cache Partition): Average Cache Miss Mount Time
Avg Scr Mt Sec WAVG (Hnode HSM, HSM Cache Partition): Average Fast Ready Mount Time
Avg Virt Drvs AVG: Average Virtual Devices Mounted
Avg x y MB/s AVG (Hnode Grid, Grid-Cluster): Data Transferred From a Cluster's Cache To Other Clusters as part of a Copy Operation. Computed by VEHSTATS.
BlkSz GT 64K SUM: Channel Blocks Written, above 65536 byte range
BlkSz LE 16K SUM: Channel Blocks Written, <=16384 byte range
BlkSz LE 2K SUM: Channel Blocks Written, <=2048 byte range
BlkSz LE 32K SUM: Channel Blocks Written, <=32768 byte range
BlkSz LE 4K SUM: Channel Blocks Written, <=4096 byte range
BlkSz LE 64K SUM: Channel Blocks Written, <=65536 byte range
BlkSz LE 8K SUM: Channel Blocks Written, <=8192 byte range
CpyThrotImpac% AVG (Hnode HSM, HSM Cache): Computed by VEHSTATS using Percent Copy Throttle and Average Copy Throttle. Equation is shown at the bottom of the table.
CSPMedx 3592xx EOI (Hnode Library, Library - Pooling, Common Scratch Pool (CSP)): Physical Count. One entry for each type of media in the pool. The x and xx values will reflect the media type.
GiB Read SUM (Vnode Adapter, Vnode Adapter-Port): Bytes Read by the Channel, converted to GiB by VEHSTATS
GiB Write SUM (Vnode Adapter, Vnode Adapter-Port): Bytes Written by the Channel, converted to GiB by VEHSTATS
Max Av DEF Min MAX (Hnode Grid, Grid): Average Deferred Queue Age. Maximum from the reporting interval.
Max Av RUN Min MAX (Hnode Grid, Grid): Average Immediate Queue Age. Maximum from the reporting interval.
Max MiB to copy MAX (Hnode Grid, Grid): Data to Copy. Maximum from the reporting interval.
Max Phy Mntd MAX (Hnode Library, Library Tape Device): Maximum Physical Devices Mounted
Max Phy Mtime MAX (Hnode Library, Library Tape Device): Maximum Physical Mount Time
Max Qtr MB/s MAX (Vnode Adapter, Vnode Adapter-Port): Bytes Read by the Channel + Bytes Written by the Channel. Computed by VEHSTATS from the 15-minute (quarter hour) intervals, converted to MB/s by VEHSTATS.

Max QtrRd MB/s MAX (Vnode Adapter, Vnode Adapter-Port): Bytes Read by the Channel. Computed by VEHSTATS from the 15-minute (quarter hour) intervals, converted to MB/s by VEHSTATS.
Max QtrWr MB/s MAX (Vnode Adapter, Vnode Adapter-Port): Bytes Written by the Channel. Computed by VEHSTATS from the 15-minute (quarter hour) intervals, converted to MiB/s by VEHSTATS.
Max Virt Drvs MAX: Maximum Virtual Devices Mounted
Max VV to copy MAX (Hnode Grid, Grid): Logical Volumes for Copy. Maximum for the reporting interval.
Max x y MB/s MAX (Hnode Grid, Grid-Cluster): Data Transferred From a Cluster's Cache To Other Clusters as part of a Copy Operation. Computed by VEHSTATS.
MiB Data Exp SUM: Amount of data exported
MiB Data Imp SUM (Export/Import): Amount of data imported
MiB S x Recv SUM (Hnode Grid, Grid-Cluster): Data Transferred into a Cluster's Cache from other Clusters as part of a Copy Operation.
MiB to Recv EOI (Hnode Grid, Grid): Data to Copy. Value at the end of the reporting interval.
MiB x y Copy SUM (Hnode Grid, Grid-Cluster): Data Transferred From a Cluster's Cache To Other Clusters as part of a Copy Operation.
Mount Hit % CALC (Hnode HSM, HSM Cache Partition): Computed by VEHSTATS using (Fast Ready Mounts + Cache Hit Mounts) / (Fast Ready Mounts + Cache Hit Mounts + Cache Miss Mounts)
PG0 35D AV MIN EOI (Hnode HSM, HSM Cache Partition): 35 Day Average Cache Age
PG0 35D VV MIG EOI (Hnode HSM, HSM Cache Partition): Volumes Migrated Last 35 Days
PG0 48H AV MIN EOI (Hnode HSM, HSM Cache Partition): 48 Hour Average Cache Age
PG0 48H VV MIG EOI (Hnode HSM, HSM Cache Partition): Volumes Migrated Last 48 Hours
PG0 4HR AV MIN EOI (Hnode HSM, HSM Cache Partition): 4 Hour Average Cache Age
PG0 4HR VV MIG EOI (Hnode HSM, HSM Cache Partition): Volumes Migrated Last 4 Hours
PG0 GiB in TVC EOI (Hnode HSM, HSM Cache Partition): Data Resident in Cache, converted to GiB by VEHSTATS
PG0 MiB to CPY EOI (Hnode HSM, HSM Cache Partition): Awaiting Replication to available Clusters
PG0 MiB to MIG EOI (Hnode HSM, HSM Cache Partition): Unmigrated Data
PG0 VV in TVC EOI (Hnode HSM, HSM Cache Partition): Virtual Volumes in Cache


More information

CS 261 Fall Mike Lam, Professor. Memory

CS 261 Fall Mike Lam, Professor. Memory CS 261 Fall 2016 Mike Lam, Professor Memory Topics Memory hierarchy overview Storage technologies SRAM DRAM PROM / flash Disk storage Tape and network storage I/O architecture Storage trends Latency comparisons

More information

Microsoft SQL Server Fix Pack 15. Reference IBM

Microsoft SQL Server Fix Pack 15. Reference IBM Microsoft SQL Server 6.3.1 Fix Pack 15 Reference IBM Microsoft SQL Server 6.3.1 Fix Pack 15 Reference IBM Note Before using this information and the product it supports, read the information in Notices

More information

INTEROPERABILITY OF AVAMAR AND DISKXTENDER FOR WINDOWS

INTEROPERABILITY OF AVAMAR AND DISKXTENDER FOR WINDOWS TECHNICAL NOTES INTEROPERABILITY OF AVAMAR AND DISKXTENDER FOR WINDOWS ALL PRODUCT VERSIONS TECHNICAL NOTE P/N 300-007-585 REV A03 AUGUST 24, 2009 Table of Contents Introduction......................................................

More information

Performance Sentry VM Provider Objects April 11, 2012

Performance Sentry VM Provider Objects April 11, 2012 Introduction This document describes the Performance Sentry VM (Sentry VM) Provider performance data objects defined using the VMware performance groups and counters. This version of Performance Sentry

More information

Introduction and Planning Guide

Introduction and Planning Guide IBM Virtualization Engine TS7700 Series Introduction and Planning Guide IBM Virtualization Engine TS7700, TS7700 Cache Controller, and TS7700 Cache Drawer Printed in U.S.A. GA32-0567-11 Note! Before using

More information

Paradigm Shifts in How Tape is Viewed and Being Used on the Mainframe

Paradigm Shifts in How Tape is Viewed and Being Used on the Mainframe Paradigm Shifts in How Tape is Viewed and Being Used on the Mainframe Ralph Armstrong EMC Corporation February 5, 2013 Session 13152 2 Conventional Outlook Mainframe Tape Use Cases BACKUP SPACE MGMT DATA

More information

IBM TS7700 grid solutions for business continuity

IBM TS7700 grid solutions for business continuity IBM grid solutions for business continuity Enhance data protection and business continuity for mainframe environments in the cloud era Highlights Help ensure business continuity with advanced features

More information

Private Swimming Lessons

Private Swimming Lessons Private Swimming Lessons Private Lessons Designed for participants who would like a 1:1 ratio. Participants will receive individual attention to improve their swimming technique and have the convenience

More information

Universal Storage Consistency of DASD and Virtual Tape

Universal Storage Consistency of DASD and Virtual Tape Universal Storage Consistency of DASD and Virtual Tape Jim Erdahl U.S.Bank August, 14, 2013 Session Number 13848 AGENDA Context mainframe tape and DLm Motivation for DLm8000 DLm8000 implementation GDDR

More information

IBM System Storage TS1130 Tape Drive Models E06 and other features enhance performance and capacity

IBM System Storage TS1130 Tape Drive Models E06 and other features enhance performance and capacity IBM Europe Announcement ZG08-0543, dated July 15, 2008 IBM System Storage TS1130 Tape Drive Models E06 and other features enhance performance and capacity Key prerequisites...2 Description...2 Product

More information

EMC Data Domain for Archiving Are You Kidding?

EMC Data Domain for Archiving Are You Kidding? EMC Data Domain for Archiving Are You Kidding? Bill Roth / Bob Spurzem EMC EMC 1 Agenda EMC Introduction Data Domain Enterprise Vault Integration Data Domain NetBackup Integration Q & A EMC 2 EMC Introduction

More information

Simple And Reliable End-To-End DR Testing With Virtual Tape

Simple And Reliable End-To-End DR Testing With Virtual Tape Simple And Reliable End-To-End DR Testing With Virtual Tape Jim Stout EMC Corporation August 9, 2012 Session Number 11769 Agenda Why Tape For Disaster Recovery The Evolution Of Disaster Recovery Testing

More information

DISK LIBRARY FOR MAINFRAME (DLM)

DISK LIBRARY FOR MAINFRAME (DLM) DISK LIBRARY FOR MAINFRAME (DLM) Cloud Storage for Data Protection and Long-Term Retention ABSTRACT Disk Library for mainframe (DLm) is Dell EMC s industry leading virtual tape library for IBM z Systems

More information

The Total Network Volume chart shows the total traffic volume for the group of elements in the report.

The Total Network Volume chart shows the total traffic volume for the group of elements in the report. Tjänst: Network Health Total Network Volume and Total Call Volume Charts Public The Total Network Volume chart shows the total traffic volume for the group of elements in the report. Chart Description

More information

TSM Studio Dataview's and Dataview Commands. TSM Studio

TSM Studio Dataview's and Dataview Commands. TSM Studio TSM Studio Dataview's and Dataview Commands TSM Studio 2.9.0.0 1 Table of Contents... 1 Commands Common to All Dataview's... 12 Automation... 14 Admin Schedules... 14 Admin Schedules Time of Day Diagram...

More information

Conditional Formatting

Conditional Formatting Microsoft Excel 2013: Part 5 Conditional Formatting, Viewing, Sorting, Filtering Data, Tables and Creating Custom Lists Conditional Formatting This command can give you a visual analysis of your raw data

More information

Disaster Recovery Workflow

Disaster Recovery Workflow CHAPTER 4 InMage CDP starts with the FX/VX agent, also known as "DataTap," which is used to monitor all writes to disk. A small amount of memory on the source machine is reserved by the DataTap (250MB).

More information

EMC DiskXtender for Windows and EMC RecoverPoint Interoperability

EMC DiskXtender for Windows and EMC RecoverPoint Interoperability Applied Technology Abstract This white paper explains how the combination of EMC DiskXtender for Windows and EMC RecoverPoint can be used to implement a solution that offers efficient storage management,

More information

IBM TS7700 Series Operator Informational Messages White Paper Version 4.1.1

IBM TS7700 Series Operator Informational Messages White Paper Version 4.1.1 Jul 6, 2017 IBM TS7700 Series Operator Informational Messages White Paper Version 4.1.1 Dante Pichardo Tucson Tape Development Tucson, Arizona Introduction During normal and exception processing within

More information

IBM TS7700 Series Operator Informational Messages White Paper Version 2.0.1

IBM TS7700 Series Operator Informational Messages White Paper Version 2.0.1 Apr 13, 215 IBM TS77 Series Operator Informational Messages White Paper Version 2..1 Dante Pichardo Tucson Tape Development Tucson, Arizona Apr 13, 215 Introduction During normal and exception processing

More information

AIMMS Function Reference - Date Time Related Identifiers

AIMMS Function Reference - Date Time Related Identifiers AIMMS Function Reference - Date Time Related Identifiers This file contains only one chapter of the book. For a free download of the complete book in pdf format, please visit www.aimms.com Aimms 3.13 Date-Time

More information

IBM TS7720 supports physical tape attachment

IBM TS7720 supports physical tape attachment IBM United States Hardware Announcement 114-167, dated October 6, 2014 IBM TS7720 supports physical tape attachment Table of contents 1 Overview 5 Product number 1 Key prerequisites 6 Publications 1 Planned

More information

Oracle Database 11g Release 2 for SAP Advanced Compression. Christoph Kersten Oracle Database for SAP Global Technology Center (Walldorf, Germany)

Oracle Database 11g Release 2 for SAP Advanced Compression. Christoph Kersten Oracle Database for SAP Global Technology Center (Walldorf, Germany) Oracle Database 11g Release 2 for SAP Advanced Compression Christoph Kersten Oracle Database for SAP Global Technology Center (Walldorf, Germany) Implicit Compression Efficient Use

More information

Rdb features for high performance application

Rdb features for high performance application Rdb features for high performance application Philippe Vigier Oracle New England Development Center Copyright 2001, 2003 Oracle Corporation Oracle Rdb Buffer Management 1 Use Global Buffers Use Fast Commit

More information

Peak Season Metrics Summary

Peak Season Metrics Summary Peak Season Metrics Summary Week Ending Week Ending Number Date Number Date 1 6-Jan-18 27 7-Jul-18 2 13-Jan-18 28 14-Jul-18 3 2-Jan-18 29 21-Jul-18 4 27-Jan-18 3 28-Jul-18 Current 5 3-Feb-18 Thursday,

More information

Peak Season Metrics Summary

Peak Season Metrics Summary Peak Season Metrics Summary Week Ending Week Ending Number Date Number Date 1 6-Jan-18 27 7-Jul-18 Current 2 13-Jan-18 28 14-Jul-18 3 2-Jan-18 29 21-Jul-18 4 27-Jan-18 3 28-Jul-18 5 3-Feb-18 Thursday,

More information

Accelerating Spectrum Scale with a Intelligent IO Manager

Accelerating Spectrum Scale with a Intelligent IO Manager Accelerating Spectrum Scale with a Intelligent IO Manager Ray Coetzee Pre-Sales Architect Seagate Systems Group, HPC 2017 Seagate, Inc. All Rights Reserved. 1 ClusterStor: Lustre, Spectrum Scale and Object

More information

EMC Unisphere for VMAX Database Storage Analyzer

EMC Unisphere for VMAX Database Storage Analyzer EMC Unisphere for VMAX Database Storage Analyzer Version 8.0.3 Online Help (PDF version) Copyright 2014-2015 EMC Corporation. All rights reserved. Published in USA. Published June, 2015 EMC believes the

More information

Peak Season Metrics Summary

Peak Season Metrics Summary Peak Season Metrics Summary Week Ending Week Ending Number Date Number Date 1 6-Jan-18 27 7-Jul-18 2 13-Jan-18 28 14-Jul-18 Current 3 2-Jan-18 29 21-Jul-18 4 27-Jan-18 3 28-Jul-18 5 3-Feb-18 Thursday,

More information

AIX Power System Assessment

AIX Power System Assessment When conducting an AIX Power system assessment, we look at how CPU, Memory and Disk I/O are being consumed. This can assist in determining whether or not the system is sufficiently sized. An undersized

More information

Single-pass restore after a media failure. Caetano Sauer, Goetz Graefe, Theo Härder

Single-pass restore after a media failure. Caetano Sauer, Goetz Graefe, Theo Härder Single-pass restore after a media failure Caetano Sauer, Goetz Graefe, Theo Härder 20% of drives fail after 4 years High failure rate on first year (factory defects) Expectation of 50% for 6 years https://www.backblaze.com/blog/how-long-do-disk-drives-last/

More information

Architecting For Availability, Performance & Networking With ScaleIO

Architecting For Availability, Performance & Networking With ScaleIO Architecting For Availability, Performance & Networking With ScaleIO Performance is a set of bottlenecks Performance related components:, Operating Systems Network Drives Performance features: Caching

More information

Mass Storage at the PSC

Mass Storage at the PSC Phil Andrews Manager, Data Intensive Systems Mass Storage at the PSC Pittsburgh Supercomputing Center, 4400 Fifth Ave, Pittsburgh Pa 15213, USA EMail:andrews@psc.edu Last modified: Mon May 12 18:03:43

More information

TSM Node Replication Deep Dive and Best Practices

TSM Node Replication Deep Dive and Best Practices TSM Node Replication Deep Dive and Best Practices Matt Anglin TSM Server Development Abstract This session will provide a detailed look at the node replication feature of TSM. It will provide an overview

More information

Using Oracle STATSPACK to assist with Application Performance Tuning

Using Oracle STATSPACK to assist with Application Performance Tuning Using Oracle STATSPACK to assist with Application Performance Tuning Scenario You are experiencing periodic performance problems with an application that uses a back-end Oracle database. Solution Introduction

More information

Setting Up the DR Series System on Veeam

Setting Up the DR Series System on Veeam Setting Up the DR Series System on Veeam Quest Engineering June 2017 A Quest Technical White Paper Revisions Date January 2014 May 2014 July 2014 April 2015 June 2015 November 2015 April 2016 Description

More information

Hostname System Configuration Documentation

Hostname System Configuration Documentation Hostname System Configuration Documentation Version 0.0 25-Jan-18 Delivered January 25, 2018 Version 0.0 By: Gary Neshanian (consultant) Nish Consulting 2336 Elden Ave., Suite G Costa Mesa, CA 92627 Phone

More information

View a Students Schedule Through Student Services Trigger:

View a Students Schedule Through Student Services Trigger: Department Responsibility/Role File Name Version Document Generation Date 6/10/2007 Date Modified 6/10/2007 Last Changed by Status View a Students Schedule Through Student Services_BUSPROC View a Students

More information

Storage Technology Requirements of the NCAR Mass Storage System

Storage Technology Requirements of the NCAR Mass Storage System Storage Technology Requirements of the NCAR Mass Storage System Gene Harano National Center for Atmospheric Research (NCAR) 1850 Table Mesa Dr. Boulder, CO 80303 Phone: +1-303-497-1203; FAX: +1-303-497-1848

More information

Storage for HPC, HPDA and Machine Learning (ML)

Storage for HPC, HPDA and Machine Learning (ML) for HPC, HPDA and Machine Learning (ML) Frank Kraemer, IBM Systems Architect mailto:kraemerf@de.ibm.com IBM Data Management for Autonomous Driving (AD) significantly increase development efficiency by

More information

Configuring IBM Spectrum Protect for IBM Spectrum Scale Active File Management

Configuring IBM Spectrum Protect for IBM Spectrum Scale Active File Management IBM Spectrum Protect Configuring IBM Spectrum Protect for IBM Spectrum Scale Active File Management Document version 1.4 Dominic Müller-Wicke IBM Spectrum Protect Development Nils Haustein EMEA Storage

More information

DAS LRS Monthly Service Report

DAS LRS Monthly Service Report DAS LRS Monthly Service Report Customer Service Manager : Diploma Aggregation Service : Daniel Ward Project/document reference : DAS LRS 2010-12 Issue : 1.0 Issue date : 17 th January 2011 Reporting Period

More information

IBM Spectrum Scale Archiving Policies

IBM Spectrum Scale Archiving Policies IBM Spectrum Scale Archiving Policies An introduction to GPFS policies for file archiving with Linear Tape File System Enterprise Edition Version 4 Nils Haustein Executive IT Specialist EMEA Storage Competence

More information

White Paper NetVault Backup Greatly Reduces the Cost of Backup Using All-Flash Arrays with the Latest LTO Ultrium Technology

White Paper NetVault Backup Greatly Reduces the Cost of Backup Using All-Flash Arrays with the Latest LTO Ultrium Technology White Paper NetVault Backup Greatly Reduces the Cost of Backup Using All-Flash Arrays with the Latest LTO Ultrium Technology Unlimited Backup Capacity and Number of Generations Adoption of all-flash arrays

More information

CS 261 Fall Mike Lam, Professor. Memory

CS 261 Fall Mike Lam, Professor. Memory CS 261 Fall 2017 Mike Lam, Professor Memory Topics Memory hierarchy overview Storage technologies I/O architecture Storage trends Latency comparisons Locality Memory Until now, we've referred to memory

More information

HP Designing and Implementing HP Enterprise Backup Solutions. Download Full Version :

HP Designing and Implementing HP Enterprise Backup Solutions. Download Full Version : HP HP0-771 Designing and Implementing HP Enterprise Backup Solutions Download Full Version : http://killexams.com/pass4sure/exam-detail/hp0-771 A. copy backup B. normal backup C. differential backup D.

More information

Peak Season Metrics Summary

Peak Season Metrics Summary Peak Season Metrics Summary Week Ending Week Ending Number Date Number Date 1 6-Jan-18 27 7-Jul-18 2 13-Jan-18 28 14-Jul-18 3 2-Jan-18 29 21-Jul-18 4 27-Jan-18 3 28-Jul-18 5 3-Feb-18 Thursday, 21-Jun-18

More information

IBM Spectrum Scale Archiving Policies

IBM Spectrum Scale Archiving Policies IBM Spectrum Scale Archiving Policies An introduction to GPFS policies for file archiving with Spectrum Archive Enterprise Edition Version 8 (07/31/2017) Nils Haustein Executive IT Specialist EMEA Storage

More information

Independent Electricity System Operator Rapid Migration of Big Data - Oracle DB using IBM Enterprise Storage Tools

Independent Electricity System Operator Rapid Migration of Big Data - Oracle DB using IBM Enterprise Storage Tools Independent Electricity System Operator Rapid Migration of Big Data - Oracle DB using IBM Enterprise Storage Tools Presented by: Sajid Rizvi, Oracle Database Consultant / IT Architect IBM GBS Customer

More information

The Memory Hierarchy Part I

The Memory Hierarchy Part I Chapter 6 The Memory Hierarchy Part I The slides of Part I are taken in large part from V. Heuring & H. Jordan, Computer Systems esign and Architecture 1997. 1 Outline: Memory components: RAM memory cells

More information

INFORMATION TECHNOLOGY SPREADSHEETS. Part 1

INFORMATION TECHNOLOGY SPREADSHEETS. Part 1 INFORMATION TECHNOLOGY SPREADSHEETS Part 1 Page: 1 Created by John Martin Exercise Built-In Lists 1. Start Excel Spreadsheet 2. In cell B1 enter Mon 3. In cell C1 enter Tue 4. Select cell C1 5. At the

More information

Agenda. CS 61C: Great Ideas in Computer Architecture. Virtual Memory II. Goals of Virtual Memory. Memory Hierarchy Requirements

Agenda. CS 61C: Great Ideas in Computer Architecture. Virtual Memory II. Goals of Virtual Memory. Memory Hierarchy Requirements CS 61C: Great Ideas in Computer Architecture Virtual II Guest Lecturer: Justin Hsia Agenda Review of Last Lecture Goals of Virtual Page Tables Translation Lookaside Buffer (TLB) Administrivia VM Performance

More information

White Paper Arcserve Backup Greatly Reduces the Cost of Backup Using All-Flash Arrays with the Latest LTO Ultrium Technology

White Paper Arcserve Backup Greatly Reduces the Cost of Backup Using All-Flash Arrays with the Latest LTO Ultrium Technology White Paper Arcserve Backup Greatly Reduces the Cost of Backup Using All-Flash Arrays with the Latest LTO Ultrium Technology Unlimited Backup Capacity and Number of Generations Adoption of all-flash arrays

More information

6/4/2018 Request for Proposal. Upgrade and Consolidation Storage Backup Network Shares Virtual Infrastructure Disaster Recovery

6/4/2018 Request for Proposal. Upgrade and Consolidation Storage Backup Network Shares Virtual Infrastructure Disaster Recovery 6/4/2018 Request for Proposal Upgrade and Consolidation Storage Backup Network Shares Virtual Infrastructure Disaster Recovery Network Infrastructure Services - Server Team DC WATER & SEWER AUTHORITY (DC

More information

CS61C : Machine Structures

CS61C : Machine Structures inst.eecs.berkeley.edu/~cs61c CS61C : Machine Structures Lecture 35 Caches IV / VM I 2004-11-19 Andy Carle inst.eecs.berkeley.edu/~cs61c-ta Google strikes back against recent encroachments into the Search

More information

Cloud Eye. User Guide. Issue 13. Date

Cloud Eye. User Guide. Issue 13. Date Issue 13 Date 2017-08-30 Contents Contents 1 Introduction... 1 1.1 What Is Cloud Eye?... 1 1.2 Functions... 2 1.3 Application Scenarios... 3 1.4 Related Services... 3 1.5 User Permissions... 17 1.6 Region...

More information

Scheduling. Scheduling Tasks At Creation Time CHAPTER

Scheduling. Scheduling Tasks At Creation Time CHAPTER CHAPTER 13 This chapter explains the scheduling choices available when creating tasks and when scheduling tasks that have already been created. Tasks At Creation Time The tasks that have the scheduling

More information

TSM Paper Replicating TSM

TSM Paper Replicating TSM TSM Paper Replicating TSM (Primarily to enable faster time to recoverability using an alternative instance) Deon George, 23/02/2015 Index INDEX 2 PREFACE 3 BACKGROUND 3 OBJECTIVE 4 AVAILABLE COPY DATA

More information

REV SCHEDULER (iseries)

REV SCHEDULER (iseries) Powerful Scheduling made easy Run scheduled jobs in an unattended environment throughout your Enterprise to increase: Throughput, Accuracy, Efficiency. Base Model is native on all platforms Run REV SCHEDULER

More information

Eclipse Scheduler and Messaging. Release (Eterm)

Eclipse Scheduler and Messaging. Release (Eterm) Eclipse Scheduler and Messaging Release 8.6.2 (Eterm) Legal Notices 2007 Activant Solutions Inc. All rights reserved. Unauthorized reproduction is a violation of applicable laws. Activant and the Activant

More information

Under the Covers. Benefits of Disk Library for Mainframe Tape Replacement. Session 17971

Under the Covers. Benefits of Disk Library for Mainframe Tape Replacement. Session 17971 Under the Covers Benefits of Disk Library for Mainframe Tape Replacement Session 17971 Session Overview DLm System Architecture Virtual Library Architecture VOLSER Handling Formats Allocating/Mounting

More information

Global Support Program Services and Features Gold Diamond Platinum

Global Support Program Services and Features Gold Diamond Platinum Global Support Program Services and Features Gold Diamond Platinum Frequency: Weekly (52 Weeks) Windchill Cache Cleanup & Log Rotation Weekly System Health Checkup Review Windchill (Vault, DB, LDAP) Backup

More information

Symantec Design of DP Solutions for UNIX using NBU 5.0. Download Full Version :

Symantec Design of DP Solutions for UNIX using NBU 5.0. Download Full Version : Symantec 250-421 Design of DP Solutions for UNIX using NBU 5.0 Download Full Version : http://killexams.com/pass4sure/exam-detail/250-421 B. Applications running on the Windows clients will be suspended

More information

IBM TS7700 v8.41 Phase 2. Introduction and Planning Guide IBM GA

IBM TS7700 v8.41 Phase 2. Introduction and Planning Guide IBM GA IBM TS7700 8.41 Phase 2 Introduction and Planning Guide IBM GA32-0567-25 Note Before using this information and the product it supports, read the information in Safety and Enironmental notices on page

More information