HPE XP7 Performance Advisor Software 7.2 User Guide


Abstract
This guide is intended for HPE XP7 Performance Advisor software administrators, users, and HPE service providers. It describes how to collect, monitor, and manage the configuration, performance, and utilization details of XP/P9500/XP7 storage systems using the HPE XP7 Performance Advisor software. For the latest release information on this product, see the HPE XP7 Performance Advisor Software Release Notes.

Part Number: T
Published: December 2017
Edition: 1

Copyright 1999, 2017 Hewlett Packard Enterprise Development LP

Notices
The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use, or copying. Consistent with FAR and , Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard Enterprise has no control over and is not responsible for information outside the Hewlett Packard Enterprise website.

Acknowledgments
Intel, Itanium, Pentium, Intel Inside, and the Intel Inside logo are trademarks of Intel Corporation in the United States and other countries. Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Adobe and Acrobat are trademarks of Adobe Systems Incorporated. UNIX is a registered trademark of The Open Group.

Contents

Introduction to HPE XP7 Performance Advisor
Overview
Getting started with PA...12
Start PA...12
Navigating the PA graphical user interface
About the graphical user interface
Main menu and Banner...12
Main menu...12
Icon descriptions...16
Session menu...17
Help sidebar
Filters and Filter sidebar...18
Status and severity icons
About Panes
Component screen details
Log out of PA...21
PA Licenses
About licenses...22
About Instant-on license on HPE XP7 PA installation...23
Instant-on license activation
Instant-on license expiration...24
Grace period expiration
About HPE XP7 PA licenses
Permanent licenses...26
Meter Based Term licenses
Exceeding permanent licensed capacity and grace period...35
Exceeding meter-based term licensed capacity and grace period...36
Violating licensed capacity
License screen details
Generate licenses at the HPEAC license key website...40
Add licenses...42
Aggregate license status...43
View status for individual licenses...44
View license history...46
Remove licenses...47
Collection Schedules...50
About Collections
Collections screen details
Prerequisites
Manage configuration collection
About configuration data collections
Configuration collection schedule screen details
Start one-time configuration data collection
Create/Edit recurring configuration data collection
Delete configuration data collection schedules
Manage performance collection...59
About performance data collections...59
Create performance data collection schedules
Enable performance collection schedules for automatic updates
Start performance data collection in case of a disk failure...66
View performance data collection schedules
Edit performance data collection schedules...66
Stop performance data collection schedules
Delete performance data collection schedules
About communicating with host agents
Request host agent updates
Remove host agent information
Monitor disk arrays
About dashboards...70
Overview dashboard
Overview dashboard screen details
Component dashboard...77
Component dashboard screen details...79
Continuous Access dashboard
Continuous Access dashboard screen details
Performance dashboard...83
Performance dashboard screen details...85
Capacity dashboard
Capacity dashboard screen details
View associated components from Capacity dashboard...91
View dashboards...91
Manage and configure dashboards
Add, remove and rearrange widgets
Reset the widgets...92
Edit Threshold
Add new licenses
Set dashboard duration or number of top components...93
Save or email dashboard statistics
Charts...94
About Charts
Charts screen elements
Plot charts
Arrays...99
About Arrays...99
Array screen details...99
Viewing other components in the array
Ports
About Ports
Ports screen details
Host Groups
About Host Groups
Host Groups screen details
Host View
About Host View
Host View screen details
Processors
About Processors
Processors screen details
View top 20 consumers of an MP blade
View MP Blade Utilization by processing types
Cache
About Cache
Cache screen details
LDEVs
About LDEVs
LDEVs screen details
RAID Groups
About RAID Groups
RAID Groups screen details
Thin Provisioning and Smart Pools
About Thin Provisioning and Smart Pools
Thin Provisioning and Smart Pools screen details
Continuous Access
About Continuous Access
Continuous Access screen details
Journals
About Journals
Journals screen details
Use charts
Auto Update charts
View 50th, 90th and 95th percentile value in charts
About Real-time monitoring
Start/Stop real-time performance data collection
About trending and forecasting
Plot trending graphs
Plot forecasting graphs
Save charts as PDF or CSV files
Set alert on array components from charts
Email charts as PDF/CSV
Zoom in on data points across performance graphs
Rearrange or move chart windows
Templates
About templates
Reuse/Apply a template
Save template charts
Modify or delete a chart template
Monitor associated components
View associated components
Set top X components for associated tabs
Select metrics for associated component tab
View historic charts in the same screen by using Group by metric option
PA Settings
About PA Settings
PA Settings screen details
About saving and registering SVP Credentials
Save/Register SVP credentials in PA
About Settings
Configure SMTP server settings
Configure alert settings
Configure reports settings
Configure data collection settings
Configure SNMP settings
Configure PA Monitor Settings
Manually configure database size
About User Settings
Set the severity level for events
Set the time zone for management station
Set the duration to predict the LDEV response time
Set alias name for arrays
Manage forecast settings
Update real-time database
Set the dashboard duration and the number of top components
Receive notifications when PA services fail
Custom Groups
About Custom Groups
Create custom groups
View Custom Groups
Modify custom groups
Delete LDEVs/Custom Groups
Users
About Users
Create user records
Change password
Delete user records
View group properties
Configure and manage alerts
Threshold settings
About Threshold settings
Threshold settings screen details for XP and P9500/XP7 disk arrays
Set threshold limits for XP and XP7 disk arrays from Threshold Setting screen
Importing threshold values from a different array
Enable or disable alerts from PA setting screen
Alerts
About Alerts
Alerts screen details
Set alerts
Enable or disable alerts
Filter records based on metrics and alert status
Set alert notifications
Establish scripts for alerts
Delete alert records
Alert History
About Alert History
Alert History screen details
Filter records in Alerts History table
Manage events
About Event Log
Event Log screen details
View event logs
Filter event records
Delete event records
View disk array components
About Summary view
Plot summary view on Chart Work Area
View Array Summary
View Array Performance
View Port summary
View RAID Group summary
View Top10 Frontend IO
View Top 10 Backend IO
View MP Blade utilization summary for XP7 disk arrays
View pools summary for P9500/XP7 disk arrays
View Continuous Access summary
View LDEV summary
Query and sort LDEV data
Configuring column settings
View CHA summary
View DKA summary
View CHA Info
View DKA Info
Multi array virtualization
Virtual Storage Machine
About Virtual Storage Machine
VSM/Resource Group screen details
View component report summary
Business Copy
About Business Copy
Business copy screen details
High Availability
About High Availability
High Availability screen details
View High Availability information
Manage PA database
About PA database
About Purge
Manually purge data
About Archive
Archive data
Migrate data to another management station
Space requirements
Migrate data using the Backup utility
Save or restore data from the Windows command line
About Importing data
Import archived data to another management station
Import archived data to the same management station
Export Database
About Export Database
Export DB screen details
Export DB CSV files
Create Export DB CSV files
Import data to MS Excel
View Export DB CSV files
Delete Export DB reports and schedules
Reports
About Reports
Types of reports
About RAID Group Utilization report
About Continuous Access Journals report
About LDEV IO report
Create an LDEV IO report
About LDEV Activity report
Create an LDEV Activity report
Reports screen details
Generate and save one-time reports
Schedule reports
Report schedule examples
View reports
Delete reports
Enable notifications
Delete report schedules
Virtualization for reports
Log report details and exceptions
Launch PA from other Storage products
About launching PA from HPE XP7 Tiered Storage Manager
View performance graphs for LDEVs
View performance graphs for RAID Groups
Support and other resources
Accessing Hewlett Packard Enterprise Support
Accessing updates
Websites
Customer self repair
Remote support
Document feedback
Logical partitions
Storage management logical partitions (SLPRs)
Cache logical partitions (CLPRs)
Sample reports
Array performance report
Total I/O Rate report
Total I/O Rate by hour of day report
Total I/O Rate Detail report
Read/Write Ratio report
Read/Write Ratio by hour of day report
Read/Write Detail report
Max/Min Frontend Port IOPS report
Max/Min Frontend Port MB/s report
LDEV IO report
Total Backend I/O Rate First Top 8 LDEVs report
Total Backend I/O Rate First Top 8 RAID Groups report
Total Frontend I/O Rate First Top 8 LDEVs report
Total Frontend I/O Rate First Top 8 RAID Groups/Pools report
RAID Group Utilization Report
Cache utilization report
Cache Utilization report
Cache Write Pending report
Percentage Read Hits report
Total Backend Transfer report
Total Backend Transfer by Hour of the Day report
Cache Side File Utilization report
ACP utilization report
ACP Utilization report
ACP Utilization by Hour of the Day report
CHIP utilization report
CHIP Utilization report
CHIP Utilization by Hour of the Day report
CHIP Processor Utilization report
ThP Pool Occupancy report
Snapshot Pool Occupancy report
Continuous Access Journal Group utilization report
LDEV Activity report
Export Database report
All report
MP blade utilization report
Average utilization of an MP Blade
MP Blade Utilization by top resources
MP Blade Utilization by the processing types
Metric Category, metrics, and descriptions
Metrics and descriptions
Real-time metrics
Forecasting performance

Introduction to HPE XP7 Performance Advisor

Overview
HPE XP7 Performance Advisor software (PA) collects, monitors, and displays the performance of XP and XP7 disk arrays. A disk array, or array, is a complete storage system, including the control and logic devices, storage devices (HDD, SSD), connecting cables, and racks. The software enables you to identify performance bottlenecks by collecting historical data and presenting it for immediate investigation using charts and reports. It also enables you to compare the performance data of individual components such as Ports, RAID Groups, LDEVs, and Processors.

In addition to its GUI, PA provides a command-line utility called the Command Line User Interface (CLUI) to monitor real-time performance of the XP and XP7 disk arrays. The CLUI allows you to monitor performance, set alerts, and configure host information using commands and scripts. You can execute commands in the CLUI and view the same data that is displayed in the GUI. The CLUI utility is operating system specific and can be installed locally on the management station, or remotely on a client system.

PA is a web-based application that includes the following resources:

- Centralized database server: Enables easy monitoring of multiple arrays from distributed management stations.
- Distributed management station: Enables you to monitor and manage arrays with less effort and reduced cost of operations.
- Distributed host agents: Enable you to share the interface with third-party tools for easy management of your storage and data center.
- Browser-based interface: Aims for easy user adoption.
- Command-line presentation client: Easy to manage and use for advanced users.

Data communication between the resources is achieved through Internet-based protocols that eliminate geographical limitations to software resource distribution.

HPE XP7 PA software: Major features
- Monitor the health status of managed arrays, consolidated at the SAN level.
- Collect and compare data about physical components of disk arrays, and flag the critical array components.
- Monitor large array configurations (PA can monitor up to 64K LDEVs).
- Create custom groups to monitor specific LDEVs, and view a graphical representation of their performance.
- Dispatch email notifications and SNMP traps, and run scripts or batch files when metrics cross defined thresholds. A trap is a type of SNMP message used to signal that an event has occurred.
- Generate reports and event logs on the overall performance of an XP or XP7 disk array, or on individual components in these arrays.
- Review a log of events that are triggered within PA.

HPE XP7 PA software: Key benefits
- Easy visualization of configuration and performance data via charts and graphs.
- Quick access to associated components for effective and thorough troubleshooting of performance bottlenecks.
- Configure and manage the extensive historical database.
- Displays details on pool savings in terms of compression, dedupe, and Flash Module (FMD) compression. An FMD is a high-speed data storage device that includes a custom flash controller and several flash memory submodules on a single PCB.
- Coexist with other products such as HPE XP7 Command View AE and HPE XP7 TSM by sharing the PA management station, and save on floor space. Tiered Storage Manager (TSM) software is used to perform migration, where data stored on one volume is moved to another volume in the disk arrays. Tiered Storage Manager moves data that resides on a predefined set of volumes to another set of volumes with the same characteristics.
- PA also provides XPWatch, a troubleshooting tool that helps you troubleshoot performance issues of XP and XP7 storage systems.
- Launch PA from other products.

P9000 disk arrays are also included as part of XP disk arrays.

Getting started with PA

Start PA

Procedure
1. Launch the application from the shortcut icon created on the management station. The PA login page appears.
2. To log in to PA, type your user name and password, and click Login. The Overview dashboard screen appears upon logging in.

PA supports five types of authentication. For more information, see the HPE XP7 Performance Advisor Software Installation Guide.

Navigating the PA graphical user interface

About the graphical user interface
The graphical user interface for PA comprises various levels of screens. This multilevel approach enables you to easily identify performance bottlenecks and start drilling into the resource instances that might need your prompt attention. The Overview dashboard shows the consolidated performance data at the overall storage level for the arrays being managed by PA. From the Overview dashboard, you can navigate to the other levels of dashboards. Using the dashboard levels of views, you can further drill down to the respective component screens and associated screens to identify and troubleshoot performance issues. The PA features are broadly grouped under the main menu bar. At any point, you can navigate from one screen to any other screen using the main menu.

Main menu and Banner
The main menu and banner area include the main menu for selecting screens, a search box, sidebars for activities and help, and a session menu.

Main menu
The main menu is the primary method for navigating to resources and performing actions. The main menu is categorized as follows:
- Dashboards: Provides a quick view of the key performance status and general health of the arrays that PA monitors.
- Components: Provides a detailed view of the configuration and performance details of all the physical components on an array.
- Features: Provides a detailed view of the configuration and performance details of all the features displayed in the figure below.
- Configurations: Enables you to issue configuration and performance collections, set threshold values for components and features, configure alerts, and manage licenses and common PA settings.
- Reports: Enables you to manage reports and the PA database.

- General: Enables you to manage the Event Log, purge and archive data, view alert history, and view custom groups and templates.
- Security: Enables you to manage PA users.

Figure 1: Expanded main menu

The main menu provides access to resources; each resource screen contains an Actions menu.

Table 1: Main menu screens breakup

Dashboards
- Overview: Provides a high-level overview of multiple arrays that are managed by PA. For more information, see Overview dashboard on page 70.
- Component: Displays status at an individual array level. For more information, see Component dashboard on page 77.
- Performance: Displays status at an individual component level. For more information, see Performance dashboard on page 83.
- Continuous Access: Displays the overall details of the Continuous Access feature in XP7 disk arrays. For more information, see Continuous Access dashboard on page 81.
- Capacity: Displays the physical total capacity and Pool capacity efficiency at an array level. For more information, see Capacity dashboard on page 89.

Components
- Arrays: Displays the component usage details of all the arrays that PA monitors during the specified threshold time interval. For more information, see Arrays on page 99.

- Ports: Displays the configuration, performance, and utilization details for disk ports that are configured on the specified array at a given threshold duration, and the array components that are associated with ports. For more information, see About Ports on page 100.
- Host Groups: Displays the configuration details, and performance and utilization graphs, for individual host group records and their associated components. For more information, see About Host Groups on page 102.
- Host View: Displays the configuration details, and performance and utilization graphs, for an individual host and its associated components. Host View is currently based on Host Group and displays the aggregate view of all the LDEVs configured for a Host Group. For more information, see About Host View on page 105.
- Processors: Displays the configuration details, and performance and utilization graphs, for individual Processors and their associated components. For more information, see About Processors on page 107.
- Cache: Displays the configuration details, and performance and utilization graphs, for individual Cache Logical Partitions (CLPRs) and their associated components. A CLPR contains cache and parity groups. It is available on the XP12000, XP10000, and later generations of the XP/XP7 disk arrays. NOTE: CLPR0 always exists (it cannot be deleted) and is a pool area for cache and parity groups that are not yet assigned to other CLPRs. For more information, see About Cache on page 114.
- LDEVs: Displays the configuration details, and performance and utilization graphs, for a list of LDEVs configured on the XP/XP7 disk array for a specified threshold duration, and the components that are associated with LDEVs. For more information, see About LDEVs on page 116.
- RAID Groups: Displays the configuration, performance, and utilization information for a list of RAID Groups, and the associated components that are configured on the specified disk array.
- Summary View: Displays the summary of all components for an array. For more information, see About Summary view on page 202.

Features
- THP/SMART Pools: Displays the configuration details, and performance and utilization graphs, for individual ThP/Smart Pool records and their associated components. For more information, see Thin Provisioning and Smart Pools on page 123.
- Continuous Access: Provides the configuration details and performance for the pair status of the primary and secondary volumes. For more information, see Continuous Access on page 126.
- High Availability: Provides the configuration details and performance for the pair status of the primary and secondary arrays. For more information, see About High Availability on page 255.
- Journals: Displays the configuration details, and performance and utilization graphs, for individual journal records and the components associated with them. For more information, see About Journals on page 128.
- VSM/Resource Group: Displays all the discovered resources in the array and provides a framework that manages the virtual IDs. Resource groups help prevent the risk of data leakage or data destruction by another storage administrator in another resource group. PA monitors the resources such as LDEVs, RAID Groups, ports, and host groups that are assigned to a resource group in an XP7 array. For more information, see Virtual Storage Machine on page 248.
- Business Copy: HPE XP7 Business Copy uses local mirroring technology to create and maintain a full copy of a data volume within the XP7 array. PA monitors the BC system on an ongoing basis to keep track of pairs and volumes and their current and past conditions. For more information, see Business Copy on page 254.

Configurations
- Collections: Enables you to manage performance and configuration collections for all the arrays managed by PA. For more information, see About Collections on page 50.
- PA Settings: Displays options and settings to manage email, register SVP credentials, configure the PA database, and manage user settings. For more information, see About PA Settings on page 146.
- Threshold Settings: Provides options to configure and edit threshold values for metrics. It also allows you to configure, enable, or disable alerts. For more information, see About Threshold settings on page 174.
- Alerts: Enables you to activate alerts on components. For more information, see About Alerts on page 184.

- License: Provides information on all the license keys installed on PA. This screen also allows you to manage PA licenses. For more information, see PA Licenses on page 22.

Reports
- Reports: Provides a visual representation of the performance trend of components for a particular threshold duration. For more information, see About Reports on page 284.
- Export DB: Displays options to export performance and utilization data into .csv files and to schedule the export activity. For more information, see About Export Database on page 272.

General
- Event Log: Displays events that are generated in response to various activities that you perform in the PA application. For more information, see About Event Log on page 199.
- Purge / Archive: Provides options to automatically or manually purge the data from the PA database. For more information, see About Purge on page 261 and About Archive on page 264.
- Alert History: Displays the history of alerts for components, if alerts are already configured and enabled on them. For more information, see About Alert History on page 192.
- Templates: Enables you to save various components and metrics that you frequently monitor as template charts. For more information, see About templates on page 139.
- Custom Groups: Loads the Custom Groups that you have created in the Summary View, and enables you to plot real-time and historical data for all given metrics, perform trending and forecasting, and create templates and export charts.

Security
- Users: Enables you to create user records, change passwords, delete user records, and view group properties. For more information, see About Users on page 171.

Icon descriptions
- Refresh: Renews the screen contents and displays the updated set of records.
- Session: Displays the user name and session duration, and provides the log out option.
- Help: Displays (or hides) the Help sidebar.
- Filter: Displays (or hides) the Filters sidebar.

- Sort: Determines whether items are displayed in ascending or descending order.
- Displays the previous and next set of associated components.

NOTE: The Collection, High Availability, and Alert History screens are auto-refreshed every minute.

Session menu
The Session menu provides information about the user name and the session duration. Use this menu to log out of PA.

Help sidebar
To open the Help sidebar, click the Help icon in the banner. The Help sidebar provides hyperlinks to the help system, the open source code used in the product, the partner program, initial configuration procedures, the license agreement, and the written offer.

- Help on this page: Opens context-sensitive help for the current screen in a new browser window or tab.
- Browse help: Opens the top of the help contents in a new browser window, which enables you to navigate the entire table of contents for the UI help.
- Tutorial on this page: Starts the in-context tutorial, which provides a guided workflow and detailed descriptions of the GUI elements.

- Support: Displays links to download software for the following: Host agents, CLUI, HPE XPWatch, HPE XPSketch, and HPE XPInfo.
- About: Displays the product part number, build, and version number.

Filters and Filter sidebar
The filter menus are displayed horizontally at the top of the master pane. Clicking the filter icon displays the filter menu, which enables you to control the amount and type of information displayed in the description pane. The Arrays filter enables you to filter components based on the type of array. The Status filter enables you to filter component performance and utilization data based on status. By default, PA displays all the components that are configured on an array for a specified threshold duration. However, you can also filter by component name. For example, click the Port Name list to view all the port components that are configured on the disk array.

Table 2: Filter by status
- All status: All records are displayed regardless of status.
- Ok: Indicates that the usage of all components is below 95% of the set threshold limit during the specified threshold duration.
- Warning: Indicates that the usage of at least one component is at 95% of the set threshold limit or higher during the specified threshold duration.

- Critical: Indicates that the usage of at least one component has crossed the set threshold limit during the specified threshold duration.
- Unknown: This may occur due to one of the following reasons:
  - PA does not collect performance data for an array during the specified threshold period, even when the configuration data is already available.
  - If a resource is not added as part of a performance schedule, the performance data is not collected for that resource.
  - Threshold values are not set, or are disabled, for a resource.

NOTE: If a component is assessed based on multiple metrics, and any one of the metrics crosses the threshold limit, then that particular metric and the overlying component are displayed as critical.

Status and severity icons
The status of a metric is based on the threshold value, which you can set in the Threshold Settings screen. The following table describes the different status icons that depict the overall health of an array.

- Critical: Indicates that the usage of at least one component has crossed the set threshold limit during the specified threshold duration for an array. Critical events or errors are identified by a red error icon and require immediate intervention.
- Warning: Indicates that the usage of at least one component is at 95% of the set threshold limit or higher during the specified threshold duration. Warning events are identified by a yellow warning icon.
- Ok: Indicates that the usage of all components is below 95% of the set threshold limit during the specified threshold duration.
- Unknown: Indicates that the performance data is not collected during the specified threshold duration.
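The status bands above follow directly from the configured threshold. The following Python sketch summarizes that classification logic for illustration only (the function name and inputs are hypothetical; this is not PA code):

# Illustrative classification per the documented bands: Ok below 95% of
# the threshold, Warning at 95% or above, Critical once crossed.
def component_status(peak_value, threshold):
    if peak_value is None or threshold is None:
        return "Unknown"   # no performance data, or no threshold set
    if peak_value > threshold:
        return "Critical"
    if peak_value >= 0.95 * threshold:
        return "Warning"
    return "Ok"

# Example: a component threshold of 80% utilization
print(component_status(70.0, 80.0))   # Ok
print(component_status(77.0, 80.0))   # Warning (at or above 76.0, 95% of 80)
print(component_status(85.0, 80.0))   # Critical
print(component_status(None, 80.0))   # Unknown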

About Panes
The component/feature screens are typically divided into two main content areas: the Master pane and the Detail pane. The master pane on the left displays all the components that are configured on an array. The detail pane on the right displays a chart work area with graphs plotted for default metrics, and an Actions menu to perform various actions.

About Master pane
The master pane displays a list of all components of a component/feature type for a particular threshold time. The most important parameters for a component are displayed in columns. Records are by default sorted by status, with the critical components displayed at the top of the list. You can sort data by clicking any of the column headings.

About Detail pane
The detail pane on the right displays the chart work area for metrics, a time filter to help you see data for a specified time range, and an Actions menu to perform actions such as adding more metrics, viewing associated components, plotting real-time graphs, and so on. The Chart View displays the performance graphs plotted for the metrics of a component during a specified threshold duration. By default, the performance graphs are plotted for the last 1 hour of the management station's time.

Component screen details
The following image is an example of a typical component and feature screen, and shows a list of Host Groups that are configured on a P9500 disk array at a specific time.

Figure 2: Master pane and detail pane

1. Master pane: Typically displays the following items in columns: a status icon for each record, and the type of component or feature. Other types of data displayed are the most commonly used metrics and associated components. Hover over the status icon to see the metrics that determine the overall status of the selected component. To plot data for all the components in the master pane, check Select All.
2. Total component records: Displays the total number of components configured on an XP/XP7 disk array for a given threshold duration.
3. Array filter: Displays all the arrays that PA monitors.
4. Status filter: Displays the status of a component record.

Detail pane
5. Component record: Displays the name of the individual Host Group record that you selected in the master pane. If you select more than one component, the number of records is displayed.
6. Associated components tabs: Enable you to view the performance of the associated components.
7. Actions menu: Enables you to perform various actions on the selected components.

Log out of PA

Procedure
1. In the HPE XP7 Performance Advisor main menu, click the Session icon. The Session sidebar appears.
2. Click Logout.

For detailed information on system and browser requirements, see the HPE XP7 Performance Advisor Software OS Support Matrix. For information about installing, upgrading, and modifying this product, see the HPE XP7 Performance Advisor Software Installation Guide.

PA Licenses

About licenses
Every XP or XP7 disk array that is connected to an instance of PA must have a valid PA license to monitor the internal raw disk capacity of the XP disk array or the physical usable capacity of the P9500 and XP7 disk array. Capacity is the amount of data storage space available on a physical storage device, usually measured in bytes (MB, GB, TB, and so on). PA monitors the internal raw disk capacity in an XP disk array or the physical usable capacity in a P9500 disk array for which a license is generated at the Hewlett Packard Enterprise Authorization Center (HPEAC) website (URL: myenterpriselicense.hpe.com/). A Permanent license is applicable for monitoring both physical usable and raw capacities. To monitor physical usable capacities, you can generate Permanent and Meter based Term licenses.

The internal raw disk capacity refers to the total capacity of all the RAID Groups created on the XP disk array. It excludes the disk capacity occupied by the external RAID Groups and pool volumes, such as the thin provisioning pools and the virtual volumes. The Physical Usable Capacity is calculated as the sum of the Total Capacity of the RAID Groups, excluding External RAID Groups.
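As a rough illustration of that formula, the following Python sketch sums RAID Group capacities while excluding external RAID Groups. The data structure is hypothetical, not a PA API:

# Minimal sketch of the documented formula: physical usable capacity is the
# sum of RAID Group capacities, excluding external RAID Groups.
from dataclasses import dataclass

@dataclass
class RAIDGroup:
    name: str
    total_capacity_tb: float
    external: bool   # True for externally attached (virtualized) storage

def physical_usable_capacity_tb(raid_groups):
    return sum(rg.total_capacity_tb for rg in raid_groups if not rg.external)

groups = [
    RAIDGroup("1-1", 40.0, external=False),
    RAIDGroup("1-2", 35.0, external=False),
    RAIDGroup("E-1", 20.0, external=True),   # excluded from licensed capacity
]
print(physical_usable_capacity_tb(groups))   # 75.0 TB to be covered by licenses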

Table 3: License management during installation or upgrade
- When you install PA 7.2 for the first time: You are provided an Instant-on license, which is automatically enabled after installation. The Instant-on license (trial license) is provided with every instance of PA. It is valid for a period of 120 days from the day you install PA, after which a grace period of 60 days is provided. During the 120-day tenure or the 60-day grace period, you must generate a Permanent license at the HPEAC license key website for each monitored XP or XP7 disk array, and install the license key on PA.
- When you upgrade PA from 7.0 or later to 7.2, and PA was in the Instant-on period: PA continues to be in that state after the upgrade, until the Instant-on period expires or you generate a Permanent license for the monitored XP/XP7 disk arrays.
- When you upgrade PA from 7.0 or later to 7.2, and PA was in the grace period: PA continues to be in that state after the upgrade, until the grace period expires or you generate a Permanent license for the monitored XP/XP7 disk arrays.
- When you upgrade PA from 7.0 or later to 7.2, a license violation is detected for one or more monitored XP or XP7 disk arrays, and PA is already in the license expiry violation duration: PA continues to be in that state until you generate a Permanent license for the monitored XP/XP7 disk arrays.
- When you upgrade PA from 7.0 or later to 7.2, a license violation is detected for one or more monitored XP or XP7 disk arrays, and PA is already in the grace period: PA continues to be in that state until the grace period expires or you generate a Permanent license for the monitored XP or XP7 disk arrays.
- When you upgrade PA from 7.0 or later to 7.2, a license violation is detected for one or more monitored XP or XP7 disk arrays, and the grace period has already expired: Configuration collections are not allowed after the upgrade process is completed.

NOTE: When you upgrade to PA 7.2 from PA 7.0, the license will be based on the physical usable capacity.

About Instant-on license on HPE XP7 PA installation
The Instant-on license, or trial license, is provided with every instance of PA. By default, this license is automatically enabled when you install PA. The following are important notes on the Instant-on license:
- It is either valid for a period of 120 days from the day you install PA, or until the time you generate and install a new license for one or more monitored XP or XP7 disk arrays. It cannot be generated from the HPEAC license key website. For more information on adding licenses, see Grace period expiration on page 26.
  Consider the following example, where HPE XP7 Performance Advisor is installed on 20th Aug'09 and monitoring five arrays (a combination of XP and XP7 disk arrays). The Instant-on license is enabled on the same date and valid until 19th Dec'09. If you add two more XP disk arrays on 31st Aug'09 for which configuration data is collected, the XP disk arrays are still monitored in the current Instant-on license mode only. The Instant-on duration of 120 days is not calculated separately for the additional two XP disk arrays.

NOTE: The term "monitored XP or XP7 disk arrays" refers to those XP or XP7 disk arrays for which at least one round of configuration data collection is complete. The term "unmonitored XP or XP7 disk arrays" refers to those XP or XP7 disk arrays for which configuration data collection is not yet performed.

- It is applicable across all the XP and XP7 disk arrays that are monitored by the current instance of PA. It is not bound to a specific XP or XP7 disk array.
- During the Instant-on license period, PA can monitor unlimited internal raw disk capacities of multiple XP disk arrays and physical usable capacities of multiple XP7 disk arrays. You can perform all tasks on the XP and XP7 disk arrays that PA supports, such as configuration and performance data collection, and generating reports and charts to view the associated array components' performance.
- It facilitates PA to collect data for all the monitored XP and XP7 disk arrays. It also allows you to perform new configuration data collection, so that PA can monitor additional internal raw disk capacities of the XP disk arrays and physical usable capacities of the P9500 disk arrays.
- You can perform configuration collection for an unmonitored XP or XP7 disk array during the Instant-on period. After the configuration collection is performed, when the Instant-on period expires and a valid license is not yet installed, PA switches over to the grace period of 60 days for that XP or XP7 disk array.

Instant-on license activation
PA indicates that the Instant-on license is activated by displaying the following status message in the top pane of the Dashboard screen and the License screen. (The Overview dashboard is the first screen when you log in to PA):

The Performance Advisor trial license expire on August 1, Please contact your HPE Representative to purchase the requisite Performance Advisor licenses to avoid disruption of Performance Advisor services.

Here, the month, day, and year are calculated as 120 days from the date when you install PA. Every day, a status message on the number of remaining Instant-on days appears under Comments in the License Status section against each XP or XP7 disk array record for which a Permanent license is not yet installed. For more information on the series of events that follow after an Instant-on license expires, see Instant-on license expiration on page 24 and Grace period expiration on page 26.

Instant-on license expiration
The Instant-on license expires when one of the following conditions is met:

When the Instant-on license period of 120 days is over and a Permanent license is not yet installed for any of the monitored XP or XP7 disk arrays, PA does the following:
- It initiates a grace period of 60 days for all the monitored XP and XP7 disk arrays. During the grace period, you can monitor unlimited internal raw disk capacities of multiple XP disk arrays and physical usable capacities of multiple XP7 disk arrays. You can also perform all the PA related operations on the XP and XP7 disk arrays, including collecting the configuration data for additional internal raw disk capacities on the XP disk arrays or physical usable capacities on the XP7 disk arrays. The configuration data collection is possible only until the date the current grace period is valid. After the grace period expires, you cannot perform new configuration data collection on any of the monitored XP and XP7 disk arrays for which Permanent licenses are not installed.
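The 120-day trial and 60-day grace period arithmetic can be checked with simple date math. A worked example in Python, using an illustrative installation date (boundary-day conventions may differ by a day from PA's own count):

# Worked example of the documented timeline: a 120-day Instant-on period
# from installation, followed by a 60-day grace period.
from datetime import date, timedelta

install_date = date(2017, 12, 1)                     # example install date
instant_on_end = install_date + timedelta(days=120)  # trial expiry
grace_end = instant_on_end + timedelta(days=60)      # grace period expiry

print(f"Instant-on license expires: {instant_on_end}")  # 2018-03-31
print(f"Grace period expires:       {grace_end}")       # 2018-05-30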
The following changes are displayed on the License screen for each XP or XP7 disk array record:

- Hardware Platform: Displays the type of the XP/XP7 disk array that is being monitored by PA.
- Serial Number: Displays the serial number of the XP/XP7 disk array.
- Array Capacity (TB): Displays the total internal raw disk capacity of an XP disk array or the physical usable capacity of a P9500 disk array.
- License Capacity (TB): Displays the capacity as 0.
- License Status: Displays the status as Expired.
- End Date: Displays the end date of the grace period.
- Comments: Displays the following status message: "No License Installed. Grace Period expire on month, day, year. Please purchase the required PA licenses now to continue using Performance Advisor on this Array." Here, month, day, year refers to the date until which the grace period is valid. The above status message is also displayed on the Dashboard screen.

When you have installed a Permanent license on PA for at least one monitored XP or XP7 disk array during the Instant-on period, PA does the following:
- Disables the Instant-on license and initiates a grace period of 60 days on all the monitored XP and XP7 disk arrays, excluding those for which Permanent licenses are installed on PA.
- Displays the following details in the License Status section for the XP disk array that has a Permanent license installed on PA:
  - License capacity (TB): Displays the aggregate capacity of all valid license keys installed.
  - License status: Displays the current status of the license, as Installed.
  - End Date: Displays Never, as a Permanent license is for an unlimited duration.
- PA begins to monitor the XP disk array under the newly installed (Permanent) license.

Ensure that you generate and install valid license keys for every XP or XP7 disk array being monitored, so that PA continues collecting data for those arrays.

Grace period expiration
The grace period that follows the Instant-on license is valid for 60 days. After the grace period expires and if valid licenses are not installed, the following changes occur:
- PA cannot monitor the XP or XP7 disk arrays for any new configuration changes made after the grace period is over. However, it continues to collect performance data for the current internal raw disk capacities of the monitored XP disk arrays or physical usable capacities of the monitored P9500/XP7 disk arrays.
- Configured alerts, notifications, reports, charts, and all other functions continue to work. However, the generated report contains a warning message for license expiry at the beginning of the report:

WARNING: License violation was detected for this array. This report may not capture performance data about the recent configuration changes made in the <XP or XP7 disk array>. Please purchase the required Performance Advisor licenses immediately.

- The License screen displays the following changes:
  - License Status: Displays the status as Expired.
  - License Capacity (TB): Displays the capacity as 0.
  - Comments: Displays the following status message: "License has expired. Grace Period has expired on month, day, year. Please purchase the required Performance Advisor licenses now to continue using Performance Advisor on this Array." Here, month, day, year refers to the date until which the grace period was valid. The above status message is also displayed on the Dashboard screen.

For PA to continue configuration collection for additional internal raw disk capacities on the monitored XP disk arrays, or physical usable capacities on the monitored XP or XP7 disk arrays, install Permanent licenses on PA for each of the XP or XP7 disk arrays. Contact your HPE representative to procure the additional licenses.

About HPE XP7 PA licenses
PA supports the following licenses:
- Permanent (for XP and XP7 disk arrays)
- Meter based Term license (only for XP7 disk arrays)

To install these licenses on Performance Advisor, see the Hewlett Packard Enterprise Authorization Center (HPEAC) website.

Permanent licenses
Permanent licenses are primary licenses that you generate and install on PA to monitor an XP or an XP7 disk array. Permanent licenses are for an unlimited duration, perpetual, and unique to an XP or XP7 disk array.

After generating a Permanent license, the PA LTU and the registration number are bound to the following:
- The XP or XP7 disk array serial number.
- The XP or XP7 disk array type.
- The internal raw disk capacity or the physical usable capacity for which the license is generated.

After installing a Permanent license, if you increase the internal raw disk capacity or the usable capacity beyond the Permanent licensed capacity, the existing Permanent license cannot be used. PA considers it a license capacity violation and initiates a grace period of 60 days for that XP or XP7 disk array. To end the grace period:
- For an XP disk array, generate and install an additional Permanent license for the internal raw disk capacity that you want HPE XP7 Performance Advisor to monitor.
- For an XP7 disk array, you can generate and install one of the following based on your requirement:
  - A Permanent license.
  - A Meter based Term license, which is a secondary license that works only if a Permanent license is already installed.

IMPORTANT: It is mandatory that you have the PA registration number to generate a Permanent frame license. This registration number is included in the product entitlement certificate that is provided with every PA License To Use (LTU) purchased. For more information on generating licenses, see Generate licenses at the HPEAC license key website on page 40. If you have not received the product entitlement certificate, provide the required details in the HPE XP7 Performance Advisor License Entitlement Request text file available in the C:\%HPSS_HOME%\data\keys folder. Send an email to HPE at licensing.ams@hpe.com with the completed document as a file attachment. The HPE XP7 Performance Advisor License Entitlement Request text file is available after you install or upgrade to PA.

An internal raw disk capacity of 64TB or greater for an XP disk array is considered an unlimited frame license (unlimited internal raw disk capacity) by PA. This implies that additional licenses need not be generated for PA to monitor disk capacities at 64TB and beyond. This is applicable only for the XP (24K or 20K) disk arrays. A frame license is now applicable for the P9500 and XP7 disk arrays. Therefore, if you purchase a frame license of 64PB, you get an unlimited license capacity. You need not purchase additional licenses if the usable capacity of the array increases; Performance Advisor will continue monitoring those arrays.

Meter Based Term licenses

NOTE: Meter based Term licenses are applicable for P9500 and XP7 disk arrays only.

Meter based Term licenses are secondary licenses that you generate at the HPEAC website and install as add-on licenses in PA to monitor additional physical usable capacities. Meter based Term licenses cannot work independently and always need to be installed on top of a Permanent license. They are not a replacement for the Permanent license.

A Meter based Term license is generated in TB-Days for the usable capacity that you want to monitor and the duration for which you want to monitor it. To calculate the total TB-Days of Meter based Term license that you require, use the following formula:

Physical Usable Capacity (TB) * Duration (number of days) = TB-Days of Meter based Term license

IMPORTANT:
- Additional physical usable capacity refers to the physical usable capacity that is beyond the Permanent licensed capacity.
- A Meter based Term license cannot be installed on multiple management stations.
- Multiple Meter based Term licenses can be generated and installed on a management station. In such cases, the licenses are used successively.

For more information on generating and installing Meter based Term licenses, see Generate licenses at the HPEAC license key website on page 40 and Add licenses on page 42.

About meter based term license requirement
Meter based Term licenses are useful when you want to monitor an additional physical usable capacity for a defined duration, or when there is an unplanned surge in the physical usable capacity that might subsequently reduce. For steady-state license requirements, use Permanent licenses. For dynamic license requirements arising out of varying business needs, use the appropriate TB-Days of Meter based Term license.

IMPORTANT: For the installed TB-Days to function properly, you must install a minimum of 1TB Permanent license.

The following image illustrates the above mentioned cases.
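The sizing formula can be applied directly, together with the rounding rule described later in this chapter (fractions of a TB and fractions of a day each count as a whole unit). An illustrative Python sketch, with results matching the example scenarios that follow:

# Documented sizing formula: Physical Usable Capacity (TB) * Duration (days)
# = TB-Days. Inputs are rounded up per the stated least-count rule.
import math

def required_tb_days(extra_capacity_tb, duration_days):
    return math.ceil(extra_capacity_tb) * math.ceil(duration_days)

print(required_tb_days(50, 39))   # 1950 TB-Days (example scenario 1)
print(required_tb_days(4, 180))   # 720 TB-Days  (example scenario 2)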

Example scenario 1
Consider a small company that books air tickets online for its customers. The company has one XP7 disk array of 75TB physical usable capacity. A Permanent license is installed on 01/01/2016 to monitor the 75TB physical usable capacity. Based on the heavy online booking trend during the December'15 - January'16 time frame due to Christmas and New Year celebrations, the company is expecting a surge in the online booking traffic beginning December 10 and continuing till the end of the 1st week of January.

The company is confident that at least 50TB additional physical usable capacity is required during this time frame (estimated 39 days). In such cases, the company has two options:
- Generate and install a Meter based Term license, as the 50TB spike in physical usable capacity is for a limited duration.
- Generate and install a Permanent license for the 50TB physical usable capacity.

Because the 50TB capacity is required for a short duration, it is economical to install TB-Days of Meter based Term license. If a Permanent license is installed for the 50TB physical usable capacity, it is used only till the first week of January, after which it remains unused until the 75TB current physical usable capacity increases by 50TB and is constantly used.

The suggested Meter based Term license configuration is to generate 1950TB-Days of Meter based Term license to monitor 50TB of additional physical usable capacity for 39 days. The 1950TB-Days are derived from the following calculation:

50TB * 39 days = 1950TB-Days of Meter based Term license

The following figure illustrates the scenario described. So, 50TB physical usable capacity is monitored every day beginning December 10 for the next 39 days. After the spike in physical usable capacity reduces to 75TB, PA uses the existing Permanent license that is already installed. So, the company has managed the short-duration spike in physical usable capacity with a Meter based Term license and also retained the Permanent license to monitor the existing 75TB physical usable capacity.

Example scenario 2
Consider the scenario of another company that has to use PA to monitor a P9500 disk array (5TB physical usable capacity) for a duration of only 180 days. It is a one-time activity for a specific project. As it is a time-bound project, Meter based Term licenses are recommended.
1. Generate and install a 1TB Permanent license. Out of the 5TB physical usable capacity, 1TB is managed by the Permanent license.
2. Generate and install a 720TB-Days Meter based Term license to monitor the additional 4TB physical usable capacity for 180 days.

4TB * 180 days = 720TB-Days of Meter based Term license

Meter based Term license activation and consumption
Similar to a Permanent license, a Meter based Term license is also bound to the disk array serial number and the physical usable capacity for which the license is generated. Once installed, you can use the TB-Days to monitor the additional physical usable capacity based on your requirement. For example, if a 90TB-Days Meter based Term license is installed, you can use the 90TB-Days in any of the following ways:
- 90TB-Days to monitor 90TB additional physical usable capacity in one day.
- 90TB-Days to monitor 1TB additional physical usable capacity for 90 days.
- 90TB-Days to monitor 10TB additional physical usable capacity for nine days.
- Any usage where the duration (Y days) multiplied by the additional physical usable capacity (XTB) equals 90TB-Days.

The following figure illustrates the use of a Meter based Term license.

At the time of installing the Meter based Term license, if the physical usable capacity is within the Permanent licensed capacity, the installed TB-Days remain dormant till the physical usable capacity exceeds the Permanent licensed capacity. They are activated only after the Permanent license is completely used. The TB-Days are used for the duration when the physical usable capacity exceeds the installed Permanent licensed capacity and the exceeded capacity can be managed by the installed TB-Days.

NOTE: After the installed TB-Days are activated, PA verifies the remaining TB-Days every day after 1:00 PM and accordingly updates the TB-Days status on the License screen - License Status section. For more information on the License screen, see Add licenses on page 42.
- If the installed TB-Days are used in the first half of a day, the TB-Days status is updated after 1:00 PM on the same day.
- If the installed TB-Days are used in the second half of a day, the TB-Days status is updated after 1:00 PM on the next day.
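The drawdown behavior described above can be pictured with a small, illustrative simulation (not PA code): the term license stays dormant while the usable capacity is within the Permanent licensed capacity, and each day the excess, rounded up to a whole TB, is deducted from the TB-Days balance:

# Illustrative daily TB-Days drawdown, assuming a simple per-day model.
import math

def run_down(permanent_tb, tb_days_balance, daily_usable_tb):
    for day, usable in enumerate(daily_usable_tb, start=1):
        excess = max(0.0, usable - permanent_tb)
        tb_days_balance -= math.ceil(excess)   # dormant days deduct nothing
        print(f"day {day}: usable={usable}TB, balance={tb_days_balance}TB-Days")

# 50TB Permanent license plus 90TB-Days installed; capacity spikes to 60TB.
run_down(permanent_tb=50, tb_days_balance=90,
         daily_usable_tb=[50, 50, 60, 60, 60])  # deducts 10TB-Days on days 3-5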

For example, a Permanent license is installed on 11/30/2010 to monitor 50TB physical usable capacity. In addition, 90TB-Days are also installed on the same day to monitor 10TB additional physical usable capacity later for nine days. As the physical usable capacity is still within the Permanent licensed capacity, PA does not use the 90TB-Days. The following table lists the fields that are updated in the License screen - License Status section when the 90TB-Days are installed for an XP7 disk array record.

- License Capacity: Displays the Permanent licensed capacity plus the installed TB-Days. Example: 50TB, +90TB-Days.
- Term (Days): Displays N/A. Term (Days) indicates the total number of days for which the installed 90TB-Days can be used. In this case, as the physical usable capacity is within the Permanent licensed capacity limit, the 90TB-Days are dormant and the Term (Days) are not shown.
- License Status: Displays the status as Installed.
- End Date: Displays Never. This is because the Permanent license, which is for an unlimited duration, is currently active.

Consider that the physical usable capacity exceeds the 50TB Permanent licensed capacity by 10TB in the first half of 12/03/2010. As a result, the 90TB-Days are activated and PA uses 10TB-Days, and updates the following fields after 1:00 PM on the same day. On 12/03/2010:

- License Capacity: Displays 50TB, +80TB-Days. This is because 10TB-Days are used on 12/03/2010 to monitor the additional 10TB physical usable capacity on that day. It also indicates that +80TB-Days remain that can be used.
- Term (Days): Displays 8. Eight days is the remaining duration for which the +80TB-Days can be used.
- License Status: Displays the status as Installed.
- End Date: Displays 12/10/2010, calculated as eight days starting from 12/03/2010.

Consider instead that the physical usable capacity exceeds the 50TB Permanent licensed capacity by 10TB in the second half of 12/03/2010. As a result, PA updates the above listed fields after 1:00 PM only on the next day (12/04/2010), even though the 10TB-Days are already used from the 90TB-Days on 12/03/2010.

After the installed TB-Days are completely used and one of the following actions is not completed, PA enters a 60-day grace period for that particular XP7 disk array:

- A Permanent license, or appropriate TB-Days of Meter based Term license, is installed to monitor the additional physical usable capacity.
- The additional physical usable capacity is reduced to match the Permanent licensed capacity.

In addition, PA reduces the installed TB-Days every day after 1:00 PM till one of the above-mentioned actions is performed. The reduction (negative count) of TB-Days is proportional to the additional physical usable capacity that needs to be monitored. After 60 days, a capacity violation is reported and PA stops monitoring that disk array.

Consider that the 90TB-Days are completely used in the first half of 12/11/2010 and appropriate TB-Days are not yet installed to monitor the 10TB additional physical usable capacity. As a result, PA does the following:
1. Enters the 60-day grace period at 1:00 PM on the same day, with 0TB-Days available.
2. Begins the reduction (negative count) of installed TB-Days. The reduction continues for the next 59 days, and the everyday reduction is proportional to the 10TB additional physical usable capacity that needs to be monitored. After 60 days, a capacity violation is reported.

After 1:00 PM on 12/11/2010 (Day 1 of the grace period), the following fields in the License screen - License Status section display:
- License Capacity: 50TB, 0TB-Days. This is because the 90TB-Days are completely used by 12/11/2010.
- Term (Days): Displays 0. Zero days, as there are no TB-Days to use.
- License Status: Displays Capacity Insufficient.
- End Date: Displays Expired.

The License Capacity continues to display the everyday reduction in TB-Days till 02/08/2011 (the 60th day). The remaining fields listed in the above table remain the same.
- On 12/12/2010, the License Capacity shows 50TB, -10TB-Days
- On 12/13/2010, the License Capacity shows 50TB, -20TB-Days
- On 02/08/2011, the License Capacity shows 50TB, -590TB-Days

NOTE: If the 90TB-Days are completely used in the second half of 12/11/2010, PA enters the 60-day grace period on the same day but updates the License screen - License Status section only after 1:00 PM on 12/12/2010. In this case, the License Capacity shows 50TB, -10TB-Days on 12/12/2010.

For information on calculating the appropriate TB-Days to end the grace period, and to read additional scenarios, see Exceeding meter-based term licensed capacity and grace period on page 36.

The least count of a Meter based Term license is 1TB-Day. By default, a fraction of a TB of physical usable capacity is counted as 1TB, and a fraction of a day is counted as one day.

Example scenario 3

Consider the following scenario:

1. An XP7 disk array has a physical usable capacity of 50TB.
2. A Permanent license is installed on 11/20/2010 to monitor the 50TB physical usable capacity.
3. Another 100TB of physical usable capacity is added on 11/30/2010 and must be monitored for 10 days.
4. As the physical usable capacity is beyond the Permanent licensed capacity, PA enters the 60-day grace period on 11/30/2010.
5. 1000TB-Days of Meter based Term license are installed on 12/02/2010. The following fields in the License screen - License Status section display:

License Capacity: 50TB, +1000TB-Days
License Status: Installed
Term (Days): N/A
End Date: Never

After 1:00 PM, the above fields are updated to display:

License Capacity: 50TB, +1000TB-Days
License Status: Installed
Term (Days): 10
End Date: 12/11/2010

6. If 90.5TB is monitored on the same day, PA considers 91TB-Days and again updates the following fields to reflect the latest data:

License Capacity: 50TB, +909TB-Days
License Status: Installed
Term (Days): 9
End Date: 12/10/2010 (nine days counted from 12/02/2010)

If 85.5TB is used during the first half of 12/03/2010 and no additional TB-Days are used for the rest of the day, PA uses 86TB-Days, counting the 0.5TB as 1TB and the part day as one day. In addition, the following fields are updated after 1:00 PM to reflect the latest data:

License Capacity: 50TB, +823TB-Days
License Status: Installed
Term (Days): 8
End Date: 12/09/2010 (eight days counted from 12/02/2010)

If 100.5TB is used during the second half of 12/04/2010, PA uses 101TB-Days, counting the 0.5TB as 1TB and the part day as one day. In addition, the following fields are updated after 1:00 PM on 12/05/2010 to reflect the latest data:

License Capacity: 50TB, +722TB-Days
License Status: Installed
Term (Days): 7
End Date: 12/08/2010 (seven days counted from 12/02/2010)

So, 722TB-Days remain, which you can use for a duration of seven days.

A Meter based Term license is not bound to a specific start and end date. The TB-Days of a Meter based Term license are used according to the quantity of physical usable capacity that is monitored, and only the duration (number of days) matters. So, you have the flexibility to use the Meter based Term license at a stretch for a fixed number of days, or spread over a longer duration, based on your requirement.

Example scenario 4

Consider the following scenario:

1. An XP7 disk array has a physical usable capacity of 50TB.
2. A Permanent license is installed on 11/15/2010 to monitor the 50TB physical usable capacity.
3. 5TB-Days are installed on 11/27/2010, because you plan to use another 1TB of physical usable capacity every day later, for five days.
4. If the physical usable capacity rises beyond the 50TB Permanent licensed capacity on 11/30/2010 and additional physical usable capacity equivalent to 1TB-Day is used every day, PA uses the 5TB-Days completely by 12/04/2010.

If additional physical usable capacity equivalent to more than 1TB-Day is consumed in a day, the 5TB-Days of Meter based Term license are used accordingly. Consider that physical usable capacity equivalent to 3TB-Days is monitored on 11/30/2010, followed by 2TB-Days on 12/01/2010. As a result, the 5TB-Days end by 12/01/2010.

Example scenario 5

Consider the following scenario:

1. An XP7 disk array has a physical usable capacity of 25TB.
2. A Permanent license is installed on 11/28/2010 to monitor the 25TB physical usable capacity.
3. 12TB-Days are installed on the same day, because you plan to use another 1TB of physical usable capacity every day later, for 12 days.
4. If the physical usable capacity rises beyond the 25TB Permanent licensed capacity on 11/30/2010 and 3.5TB is used on the same day, PA considers 4TB-Days.

5. If there has been no activity from 11/30/2010 till 12/28/2010, the remaining 8TB-Days are not used.
6. If 4TB is used on 12/29/2010, followed by another 4TB on 12/30/2010, PA considers 4TB-Days on each day, and the 8TB-Days are completely used by 12/30/2010.

So, the TB-Days are used only when additional physical usable capacity must be monitored.

Exceeding permanent licensed capacity and grace period

When the internal raw disk capacity of an XP disk array or the physical usable capacity of an XP7 disk array exceeds the Permanent licensed capacity, PA switches to the 60-day grace period for that particular disk array. The License Status for such XP or XP7 disk arrays displays Capacity Insufficient in the View License Status section. The following informational message appears under Comments:

Array capacity exceeds licensed Capacity which was detected on month, day, year. Grace Period expires on month, day, year. Please purchase the required Performance Advisor licenses now to continue using Performance Advisor on this Array.

where month, day, year in the second sentence is the date until which the grace period is valid.

To continue monitoring the internal raw disk capacity, purchase the required PA LTUs and generate a Permanent license before the grace period expires. Similarly, to continue monitoring the physical usable capacity, you can generate a Permanent license or appropriate TB-Days of Meter based Term license, based on your requirement. PA verifies the XP7 disk array physical usable capacity whenever you perform or schedule configuration data collection.

Example scenario 7

Consider the following points:

1. An XP7 disk array has a physical usable capacity of 50TB.
2. A Permanent license is installed on 11/20/2010 to monitor the 50TB physical usable capacity.
3. Due to a surge in storage requests, another 25TB of physical usable capacity is added on 11/30/2010 for a duration of five days.
4. As the physical usable capacity now exceeds the Permanent licensed capacity, PA enters the 60-day grace period.

Because this is a short-term, unplanned surge in storage requests, you can install TB-Days of Meter based Term license to monitor the additional physical usable capacity for the specified duration. To monitor 25TB for five days (at the rate of 25TB a day), generate and install 125TB-Days on 11/30/2010. As 125TB-Days are sufficient for five days, PA ends the grace period and updates the following fields in the License screen - License Status section:

After installation on 11/30/2010:

License Capacity: 50TB, +125TB-Days
License Status: Installed
Term (Days): N/A
End Date: Never

After 1:00 PM on 11/30/2010:

License Capacity: 50TB, +100TB-Days (25TB-Days are consumed after installation)
License Status: Installed
Term (Days): 4
End Date: 12/04/2010

If the remaining 100TB-Days are completely used in the first half of 12/04/2010 and none of the following is done:

The physical usable capacity is reduced to within the Permanent licensed capacity limit
Extra TB-Days are installed to monitor the 25TB physical usable capacity
Another Permanent license is installed to monitor the 25TB physical usable capacity

then PA again enters the 60-day grace period after 1:00 PM on the same day, with 0TB-Days available. In addition, it begins reducing (negative count) the installed TB-Days by 25TB-Days a day, and updates the following fields in the License screen - License Status section daily.

After 1:00 PM on 12/04/2010:

License Capacity: 50TB, 0TB-Days
License Status: Capacity Insufficient
Term (Days): 0
End Date: Expired

If the 100TB-Days are completely used in the second half of 12/04/2010, PA enters the 60-day grace period on the same day but updates the License screen - License Status section only after 1:00 PM on 12/05/2010. In this case, the License Capacity shows 50TB, -25TB-Days on 12/05/2010.

Exceeding meter-based term licensed capacity and grace period

When the existing TB-Days are not sufficient to monitor the additional physical usable capacity, PA considers it a capacity violation and enters a grace period of 60 days. During the grace period, PA begins reducing the installed TB-Days. Within the 60 days, if a Permanent license is installed, PA stops reducing the TB-Days and ends the grace period for that particular XP7 disk array. If appropriate TB-Days are installed, PA uses part of the units to nullify the negative count and the rest of the units to end the grace period. The daily reduction in TB-Days is equal to the additional physical usable capacity because of which the grace period started.

NOTE: Reduction, or negative counting, is only applicable to the installed TB-Days. It is not applicable to Permanent licenses.

After the 60-day grace period, PA stops configuration data collection for any additional physical usable capacity. It continues performance data collection for the existing physical usable capacity.

Example scenario 8

Consider the following points:

1. An XP7 disk array has a physical usable capacity of 50TB.
2. A Permanent license is installed on 11/23/2010 to monitor the 50TB physical usable capacity.
3. Due to a surge in storage requests around 11/30/2010, another 10TB of physical usable capacity is added for a duration of five days.
4. Because this is a short-term, unplanned request, it is addressed by installing 50TB-Days of Meter based Term license on 11/30/2010.
5. After the installed TB-Days are consumed by 12/04/2010 and extra TB-Days are not added to monitor the 10TB physical usable capacity, PA does the following:

a. Enters the grace period after 1:00 PM on 12/04/2010, with 0TB-Days available. The following fields are updated with the latest TB-Days data:

License Capacity: Displays 50TB, 0TB-Days
License Status: Capacity Insufficient
Term (Days): 0
End Date: Expired

b. Begins reducing the installed TB-Days by 10TB every day starting from 12/05/2010:

On 12/05/2010, 0TB-Days are reduced to -10TB-Days
On 12/06/2010, -10TB-Days are reduced to -20TB-Days
On 12/07/2010, -20TB-Days are reduced to -30TB-Days
On 12/08/2010, -30TB-Days are reduced to -40TB-Days
On 12/09/2010, -40TB-Days are reduced to -50TB-Days
On 12/10/2010, -50TB-Days are reduced to -60TB-Days

6. The reduction in TB-Days continues until you install the appropriate TB-Days, so that PA does the following:

Shows positive TB-Days
Ends the grace period for that particular XP7 disk array

7. So, if you want to install TB-Days on 12/10/2010 for a duration of five days, generate 110TB-Days of Meter based Term license. Out of the 110TB-Days:

60TB-Days (6 days * 10TB) are used to nullify the reduction in the installed TB-Days.
50TB-Days (5 days * 10TB) are required for PA to continue monitoring the 10TB physical usable capacity for another five days.

With 110TB-Days, PA ends the grace period and continues to monitor the 10TB physical usable capacity for another five days.

When a fraction of a TB of additional physical usable capacity is monitored and the installed TB-Days are not sufficient, PA considers it a capacity violation and enters a grace period of 60 days.

In such a case, if you install the appropriate TB-Days, PA ends the grace period for that particular XP7 disk array.

Example scenario 9

Consider the following points:

1. An XP7 disk array has a physical usable capacity of 75TB.
2. A Permanent license is installed on 11/24/2010 to monitor the 75TB physical usable capacity.
3. Due to a surge in storage requests around 11/30/2010, another 20TB of physical usable capacity is added for a duration of five days.
4. Because this is a short-term, unplanned request, it is addressed by installing 100TB-Days of Meter based Term license on 11/30/2010.
5. As the physical usable capacity is beyond the Permanent licensed capacity, PA uses the 100TB-Days of Meter based Term license. Based on the physical usable capacity consumed, PA uses the appropriate TB-Days of Meter based Term license. The following are sample consumptions from day 1 to day 3:

11/30/2010: 51.5TB monitored using 52TB-Days
12/01/2010: 20.3TB monitored using 21TB-Days
12/02/2010: 24.9TB monitored using 25TB-Days

6. As 4TB of additional physical usable capacity remains to be monitored and only 2TB-Days are available, PA does the following:

a. Enters the 60-day grace period after 1:00 PM on 12/02/2010 and updates the following fields with the latest TB-Days data:

License Capacity: Displays 75TB, +2TB-Days
License Status: Capacity Insufficient
Term (Days): 0
End Date: Expired

b. Begins reducing the installed TB-Days by 20TB every day starting from 12/03/2010:

On 12/03/2010, +2TB-Days are reduced to -18TB-Days
On 12/04/2010, -18TB-Days are reduced to -38TB-Days
On 12/05/2010, -38TB-Days are reduced to -58TB-Days

7. The reduction in TB-Days continues until you install the appropriate TB-Days of Meter based Term license, so that PA does the following:

Shows positive TB-Days
Ends the grace period for that particular XP7 disk array

8. So, if you want to install TB-Days on 12/05/2010 for a duration of five days, generate 160TB-Days of Meter based Term license. Out of the 160TB-Days:

60TB-Days (3 days * 20TB) are used to nullify the reduction in the installed TB-Days.
100TB-Days (5 days * 20TB) are required for PA to continue monitoring the 20TB physical usable capacity for another five days.

With 160TB-Days, PA ends the grace period and continues to monitor the 20TB physical usable capacity for another five days.

Violating licensed capacity

After the 60-day grace period, PA considers it a capacity violation and stops configuration data collection for any additional internal raw disk or physical usable capacity. It continues the existing performance data collection. During a capacity violation phase, do one of the following:

Install a Permanent license for PA to monitor the XP disk array.
Install a Permanent license or appropriate TB-Days of Meter based Term license for PA to monitor the XP7 disk array.
Reduce the internal raw disk capacity of the XP disk array or the physical usable capacity of the XP7 disk array to match the Permanent licensed capacity.

Then, perform configuration data collection. PA verifies the internal raw disk capacity of the XP disk array or the physical usable capacity of the XP7 disk array. If it is less than or equal to the licensed capacity, the existing capacity violation is removed.

License screen details

The License Status section of the License screen displays the following elements:

Hardware Platform: Displays the type of the XP/XP7 disk array that is being monitored by PA.
Serial Number: Displays the serial number of the XP/XP7 disk array.
Array Capacity (TB): Displays the total internal raw disk capacity of an XP disk array or the physical usable capacity of a P9500 disk array.
License Capacity (TB): Displays the capacity as 0.
Term (Days): Displays the number of days until the license expires.
License Status: Displays the status as Expired.
End Date: Displays the end date of the grace period.

Comments: Displays the following status message:

No License Installed. Grace Period expires on month, day, year. Please purchase the required Performance Advisor licenses now to continue using Performance Advisor on this Array.

where month, day, year refers to the date until which the grace period is valid. The above status message is also displayed on the Dashboard screen.

View License History: View the events generated for each license key. The time stamp and description for each event are also displayed.

Click Refresh to view the latest data on the License screen.

Generate licenses at the HPEAC license key website

Prerequisites

Ensure that you have the registration number, which is required for generating a license. The product license entitlement certificate includes a registration number, which is a unique identifier that helps you to generate a license key for PA. The registration number is unique to the XP disk array or the XP7 disk array for which it is used and cannot be associated with another XP or XP7 disk array serial number. The following sets of product entitlement certificates are available:

XP entitlement certificates: The registration numbers can be used on XP disk arrays only, such as XP24000, XP20000, XP12000, and XP10000.
P9000 entitlement certificates: The registration numbers can be used on P9000 disk arrays only.
XP7 entitlement certificates: The registration numbers can be used on XP7 disk arrays only.

Based on the license entitlement certificate that you receive, generate and install a Permanent license for the internal raw disk capacity or the physical usable capacity that you want PA to monitor. You can generate Permanent licenses for the unmonitored XP or XP7 disk arrays and install them on PA. However, the license details for those unmonitored arrays appear in the View License Status section of the License screen only after you collect their configuration data.

Procedure

1. Access the HPEAC license key website from your web browser. The Hewlett Packard Enterprise Authorization Center license key web page appears.
2. Click Generate a License Key in the Main Menu section. The Generate License Key screen appears.
3. Enter the registration number in the Registration Number or Product Authorization Key box. Ensure that the registration number is the same as that mentioned in the product entitlement certificate.

4. Click Next >>. The Array information input screen appears. The following details are displayed:

Registration number
PA base license
Additional PA LTUs that you purchased
Internal raw disk capacity or physical usable capacity that the LTU supports

5. Provide the following details on the Array information input screen:

Enter the Array DKC serial number, which is a five-digit number, such as 10900.
Select the Hardware platform from the list. The supported XP7 disk array and XP disk array models, such as P9500, XP24000, XP20000, XP12000, and XP10000, are displayed for selection.

6. Click Next >>. The Requestor Information screen appears.
7. Provide the requestor and company-related information, and click Next >>. The Requestor Information screen appears again with all the details that you provided.
8. Click Next >> to confirm the details. The Certificate screen appears and provides the license details. You can do the following on the Certificate screen:

Click Save to print a copy of the certificate.
Click Keyfile to save the license file as a .dat file on your system.
Use the email option to send a copy of the license certificate and the key file to the intended recipient.

After installing the Permanent license for an XP disk array, if you want PA to monitor internal raw disk capacity beyond the Permanent licensed capacity, generate another Permanent license at the HPEAC website. Similarly, after installing the Permanent license for an XP7 disk array, if you want PA to monitor physical usable capacity beyond the Permanent licensed capacity, you can generate another Permanent license or a Meter based Term license at the HPEAC website.

The procedure to generate and add a Meter based Term license is similar to the above procedure for a Permanent license. While generating a Meter based Term license, you can select the TB-Days that you want to use when entering the registration number. For example, if you have a Meter based Term LTU for 100TB-Days and you want to use only 25TB-Days, enter 25 as the TB-Days. You are provided a license key that can be used for 25TB-Days. You can use the remaining 75TB-Days later for the same or a different XP7 disk array. The TB-Days of Meter based Term license that you generate are bound to the XP7 disk array serial number when the license key is generated.
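Before entering the registration number, you can work out the TB-Days to request from the least-count rule stated earlier: fractions of a TB count as 1TB and fractions of a day count as one day. The following Python sketch is illustrative only (the function name is invented; the only inputs it relies on are the least-count rule and the scenario figures used in this chapter):

import math

def tb_days_needed(additional_tb, days):
    # TB-Days to generate: fractions of a TB round up to 1TB,
    # and fractions of a day round up to one day.
    return math.ceil(additional_tb) * math.ceil(days)

print(tb_days_needed(25, 5))   # 125 (example scenario 7: 25TB for five days)
print(tb_days_needed(0.5, 9))  # 9   (0.5TB counts as a full 1TB)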

Add licenses

Procedure

1. In the HPE XP7 Performance Advisor main menu, click License.
2. From the Actions menu, click Add.
3. In the Add License page, click Browse, and add the licenses (.dat files) that you generated at the HPEAC license key website.
4. Click Open.
5. Click Add License. The license details appear under the License Status pane.

CAUTION: After the licenses are installed, do not modify the date and time on the management station where PA is installed. Modifying them may result in inaccurate configuration and performance collections.

The following details are updated in the View License File Status section. These details are for the specific XP/XP7 disk array serial number for which the license is generated:

Hardware Platform: Displays the type of XP/XP7 disk array that PA monitors.
Serial Number: Displays the serial number of the array.
Array Capacity (TB): Displays the internal raw disk capacity of an XP disk array or the physical usable capacity of an XP7 disk array.
License Capacity (TB): Displays the aggregate capacity of all valid license keys installed. The License Capacity (TB) is updated every day after 1:00 PM. If a 15TB Permanent license is installed for an XP or an XP7 disk array, the License Capacity (TB) displays 15TB. If a 15TB Permanent license and 100TB-Days of Meter based Term license are installed for an XP7 disk array, the License Capacity (TB) displays 15TB, +100TB-Days.

Term (Days): Displays N/A after the TB-Days are installed. The Term (Days) is updated every day after 1:00 PM to show the remaining number of days for the installed TB-Days. (Applicable only for Meter based Term licenses.) Displays 0 if the installed TB-Days are insufficient to monitor the additional usable capacity. For an XP disk array record, the Term (Days) displays N/A, as only a Permanent license is used to monitor the internal raw disk capacity. The Term (Days) also displays N/A for an XP7 disk array record if the usable capacity is monitored using only the Permanent license.
License Status: Displays Installed, which indicates that new configuration collection is possible.
End Date: If a Permanent license is installed for an array, the End Date displays Never, which indicates that the license is for an unlimited duration. If TB-Days are installed for an XP7 disk array, the date when the license ends appears. The End Date is updated every day after 1:00 PM to show the date when the installed TB-Days will be completely used.
Comments: Displays the appropriate messages for each array. The message includes the type of license installed, license duration, and expiry date.

To refresh the License screen, from the Actions menu, click Refresh.

Aggregate license status

PA maintains the following for an XP or an XP7 disk array, if you have generated and installed licenses for that disk array on PA:

An aggregate of internal raw disk capacities in an XP disk array
An aggregate of physical usable capacities in an XP7 disk array

If a Meter based Term license is installed, the TB-Days appear next to the Permanent licensed capacity.

IMPORTANT: The above mentioned license details are displayed only for those XP and XP7 disk arrays for which at least one round of configuration data collection is complete (monitored disk arrays).

For a description of the columns, see the table under Add licenses on page 42. You can also view the individual license details.

View status for individual licenses

Procedure

1. In the HPE XP7 Performance Advisor main menu, click License.
2. In the License Status pane, select an XP or an XP7 disk array record for which you want to view the status of individual licenses.
3. Click View Details.

In addition to the details displayed in the License Status section, the following details specific to the installed licenses appear in the View License Detail section:

Key Type: Displays the license type, PERMANENT or METER. METER appears only when you select an XP7 disk array record for which TB-Days are installed in PA.
Installed License Capacity: Displays the capacity of the individual license keys. If you select an XP disk array record, this column always displays the Permanent licensed capacity. If you select an XP7 disk array record whose physical usable capacity is monitored using only a Permanent license, this column displays only the Permanent licensed capacity. If you select an XP7 disk array record for which both the Permanent license and TB-Days of Meter based Term license are installed, this column displays the Permanent licensed capacity and also the installed TB-Days.

Licenses Available: Displays the available license capacity. If you select an XP disk array record, this column always displays the Installed License Capacity value. If you select an XP7 disk array record whose physical usable capacity is monitored using only a Permanent license, this column displays the Installed License Capacity value. In the case of Meter based Term licenses:

a. If you select an XP7 disk array record for which both the Permanent license and TB-Days of Meter based Term license are installed, and the TB-Days are currently active, this column displays the Permanent licensed capacity and the remaining installed TB-Days.
b. If the installed TB-Days are completely used and additional TB-Days are not available, this column displays the Permanent licensed capacity and 0TB-Days of Meter based Term license.
c. If the installed TB-Days are dormant, this column displays the Permanent licensed capacity and the TB-Days that you installed. If a 50TB Permanent licensed capacity is active and you installed 10TB-Days that are dormant, the column displays: 50TB, 10TB-Days.

NOTE: In any of the preceding three cases (a, b, and c), if you remove the Meter based Term license, only the Permanent license capacity is displayed.

Expired Date: If you select an XP or an XP7 disk array record whose physical usable capacity is monitored using only a Permanent license, this column is blank, as the Permanent license is for an unlimited duration. In the case of Meter based Term licenses:

If you select an XP7 disk array record for which both the Permanent license and TB-Days of Meter based Term license are installed, and the installed TB-Days are dormant, this column is blank.
If you select an XP7 disk array record for which both the Permanent license and TB-Days of Meter based Term license are installed, and the installed TB-Days are active, this column is blank.
If the installed TB-Days are completely used and additional TB-Days are not available, this column displays the date when the installed TB-Days expired for each Meter based Term licensed capacity. This column is blank for the Permanent licensed capacity.

NOTE: In any of the preceding three cases, if you remove the Meter based Term license, only the information regarding the Permanent license is displayed and this column is blank.

For example:

The Expired Date displays December 19, 2010 for 5TB-Days listed under the Installed License Capacity column, as the 5TB-Days are completely used on that date.
The Expired Date is blank for the 3TB Permanent licensed capacity, as it is for an unlimited duration.
The Expired Date is also blank for the 45TB-Days listed under the Installed License Capacity column, as the 45TB-Days are still active.

MBT Units Consumed (TB): Displays the installed TB-Days that are used. The data displayed is for the last one week.
MBT Consumption Date: Displays the date when the TB-Days are consumed. The data displayed is for the last one week. For example, a first record showing 2 MBT units with a consumption date of December 21, 2010 indicates that 2 MBT units were consumed on that date.

View license history

The View License History displays the license history information for each license key. The time stamp when an event occurred is also displayed for each event record.

You can search for events generated during a specific duration by date. To do this, from the Actions menu, click Find Events. In the Find License Detail pop-up page, provide the start date, the end date, and the time in the calendar, and click Find.

To refresh the License screen, go to Actions > Refresh.

Remove licenses

Prerequisites

You are logged in as an Administrator or a user with administrator privileges to remove permanent licenses from the MS.

Procedure

1. In the HPE XP7 Performance Advisor main menu, click License.
2. In the License Status section, select the license record of the XP/XP7 disk array, and from the Actions menu, click Remove.
3. In the Remove License section, select Permanent from the License Type list.

When you select the license type, all the licenses of that license type are removed. If you require an instance of that license type, select the respective .dat file and install it again. For more information on adding licenses, see Add licenses on page 42.

Removing Meter based Term licenses for XP7 disk arrays

PA removes the aggregate TB-Days of Meter based Term license. There is no option to remove individual TB-Days of a Meter based Term license.

NOTE: Once a Meter based Term license is removed, it cannot be added again. However, another Meter based Term license can be installed.

Consider the following scenario where different TB-Days have been installed:

Table 4: Meter based Term licenses for P9500 array with 115TB-Days available capacity

License Capacity | Available Capacity | Status
100TB-Days | 10TB-Days | Active
100TB-Days | 5TB-Days | Active
100TB-Days | 100TB-Days | Active

In the above table, the License Capacity shows the total count of the TB-Days installed, while the Available Capacity shows the remaining TB-Days. So, the aggregate is 115TB-Days. When you remove the Meter based Term license, the aggregate TB-Days are removed, which is 115TB-Days. You cannot remove individual TB-Days, such as the 5TB-Days or the 10TB-Days. After the removal, PA considers it a capacity insufficient violation and enters a grace period for that disk array. You can install the required TB-Days of Meter based Term license to end the grace period.

If the TB-Days count is negative, the removal of the Meter based Term license is not allowed. For example, consider the following scenario where different TB-Days have been installed:

Table 5: Meter based Term licenses for P9500 array with negative TB-Days capacity

License Capacity | Available Capacity | Status
100TB-Days | -1TB-Days | Expired
100TB-Days | -7TB-Days | Expired
100TB-Days | -3TB-Days | Expired

The aggregate capacity is -11TB-Days. In such a case, PA does not allow the removal of the Meter based Term license keys, and the disk array remains in the grace period. You can install the required TB-Days of Meter based Term license to end the grace period.

NOTE: If the aggregate capacity is 0TB-Days, you can still remove the Meter based Term license. In such a case, PA enters the grace period and starts negative counting.

1. In the HPE XP7 Performance Advisor main menu, click License.
2. In the License Status pane, select the XP7 disk array record for which you want to remove the Meter based Term license, and from the Actions menu, click Remove.
3. In the Remove License dialog box, select METER from the License Type list.
4. Click Remove License(s). The Confirm Delete dialog box appears.
5. Click Yes. A message indicating the removal of the license appears on top of the Remove License dialog box.

Once the Meter based Term licenses are removed, they cannot be installed again on the same management station. However, they can be installed on a different management station.

The available capacity will be the same as the license capacity of the Meter based Term license key. When positive Meter based Term license units are removed from one management station and re-installed on another, only the unused Meter based Term units are added. Consider the following scenario where different TB-Days have been installed:

Table 6: Meter based Term licenses for P9500 array

License Capacity | Source Management Station | Target Management Station
1TB, 100TB-Days | 90TB-Days are consumed, then the Meter based Term units are removed | Only 10TB-Days are added

That is, if 90TB-Days are consumed, and the Meter based Term units are removed and then installed on a different management station after the data is imported using the Backup and Restore tool, only the 10TB-Days that remained at the time of removing the 100TB-Days units are added.

If the Permanent license is removed while the Meter based Term license has a positive count, PA enters the grace period and the Meter based Term license does not work.
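The removal rules in tables 4 through 6 reduce to a simple check on the aggregate of the available TB-Days. The following Python sketch is a model only; PA exposes no such API, and the function name is invented for illustration:

def can_remove_meter_license(available_tb_days):
    # Removal always operates on the aggregate of all installed TB-Days;
    # individual TB-Days cannot be removed.
    aggregate = sum(available_tb_days)
    return aggregate >= 0  # a negative count blocks removal; zero or more allows it

print(can_remove_meter_license([10, 5, 100]))  # True  (aggregate +115TB-Days, table 4)
print(can_remove_meter_license([-1, -7, -3]))  # False (aggregate -11TB-Days, table 5)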

Collection Schedules

About Collections

PA interacts with the arrays through hosts that have the operating system specific PA host agents installed. These hosts form the channel of communication between PA and the arrays. A channel is a path along which signals can be sent; for example, a data channel or an output channel. Once the host agents are installed, the corresponding records automatically appear in the Host Information pane on the Collections screen. To create and assign command devices to host agents, refer to the HPE XP7 Performance Advisor Software Installation Guide. Once a command device is configured on the array and assigned to the host agent for that array, the corresponding command device details appear in the Config and Performance Collection table on the Collections screen.

You can request a host information update every time you modify the configuration of the host agent or the associated array. You can remove records of the host agents that are no longer connected to the management station.

After the host agents are discovered, start the configuration data collection, followed by the performance data collection, for an array using its corresponding command device. Ensure that you have saved and registered the SVP credentials before issuing a configuration collection. The configuration data collection can be a one-time activity or scheduled periodically on a daily, weekly, or monthly basis. During a scheduled configuration data collection, the data pertaining to new components is automatically included in the latest configuration data. During a one-time configuration data collection, the data on new components cannot be collected after the configuration collection is complete. You must perform another one-time configuration data collection to receive the updated configuration data.

To view the configuration summary details of the arrays, navigate to Summary View > Array Summary.

Collections screen details

Table 7: Config and Performance Collection

Array: Displays the arrays that communicate with PA connected hosts. Specifies the DKC number of the array.
Host ID: Displays the system name of the host.
Port: Displays the port that is configured to communicate data between the command device (on an array) and the associated host agent.
Cmddev: Displays the ID of the LDEV that is configured as a command device.
DeviceFile: Displays the device file for the command device.

Config Collection:

TimeStamp: Displays 0 if the configuration collection is not yet initiated for an array. After the configuration data collection is complete, the TimeStamp displays the latest date and time when PA received the complete configuration data from the array.
Frequency: Displays none if the configuration collection is not yet scheduled or a one-time configuration data collection is performed on the array. For configuration collection, the frequency of the collection schedule depends on the schedule time that you set in the Create Config page. For example, if you set the Collection Schedule as Weekly, the day as Sunday, and the Start Time as 5 hours, PA performs the configuration collection for the array every Sunday at 5 AM. The schedule type and duration (date and time stamp) are displayed under Frequency only if you have scheduled the configuration data collection.
Scheduled MS Host Name: Displays the name of the MS host where the configuration schedules are created.
Status: Hover the pointing device over the status icon to see the status, which is displayed as a tool tip. The configuration collection (one-time or scheduled) displays one of the following statuses: Collection Success, Collection in Progress, Not Yet Done, or Collection is Failed. Click Status to sort the table and display the array for which you are issuing a config collection on top.

Performance Collection:

Schedule Name: Displays the schedule name that you provided for the specified performance collection cycle.

Status: Displays the status of the performance collection schedule. Hover the pointing device over the status icon to see the status, which is displayed as a tool tip.
Frequency in min(s) (DKC, RG, Ports): Displays the frequency of the performance collection schedule in minutes for the DKC, RG, and Port components. In performance collection, the frequency indicates the time interval in minutes at which the performance data is updated. For example, if you set the Frequency for the Ports component as 4 in the Create/Edit Perf page, the performance collection for ports is updated at every 4-minute interval.
Performance Collection Timestamp (DKC, RG, Port): Displays the date and time when PA received the complete performance data from the XP/XP7 disk array for the DKC, RG, and Port components.

Click the refresh icon to perform a manual refresh of the Config and Performance Collection table.

Table 8: Host Information

Host: Displays the system name of the host.
OS: Displays the operating system installed on the host and its current version.
HA Version: Displays the version of the host agent installed on the host.
RMLib Version: Displays the RAID Manager Library version installed on the host. The RAID Manager Library (RML, RMLIB) is an API library that enables third-party software products to directly operate some of the functions on the P9500/XP7 and XP disk arrays.
Updated: Displays the date and the time stamp when the configuration device file was sent to the management station.
Status: Displays the status of the host update request as either Requested or Received.
MS Host Name: Displays the name of the machine on which the host agent is installed.
MS Version: Displays the version of the PA management station.
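As an illustration of how the schedule settings described in Table 7 translate into run times, the following Python sketch computes the next run of a weekly configuration collection schedule. It is a model only; PA performs this scheduling internally, and the function name is invented:

from datetime import datetime, timedelta

def next_weekly_run(now, weekday, hour):
    # Next run for a weekly schedule (weekday: Monday=0 ... Sunday=6).
    run = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    run += timedelta(days=(weekday - now.weekday()) % 7)
    if run <= now:
        run += timedelta(days=7)
    return run

# The Table 7 example: Weekly, Sunday, Start Time 5 hours (every Sunday at 5 AM).
print(next_weekly_run(datetime(2010, 11, 30, 12, 0), weekday=6, hour=5))
# -> 2010-12-05 05:00:00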

Prerequisites

CAUTION: Ensure that the date and time on the management station, the database system (in a decoupling setup), and the hosts are synchronized with the local time zone to receive accurate configuration data. This condition is also applicable for the client systems that use the IE browser to access PA on a management station, and the systems that have the Command Line User Interface (CLUI) software installed.

The following are important notes applicable for both the one-time and scheduled configuration data collection:

Setting up the collection is not possible without first registering the SVP credentials. To register or save the SVP credentials, click Register/Save SVP Credentials on the Collections screen. Alternatively, navigate to PA Settings > Register/Save SVP Credentials.
Select only one command device for an array to perform the configuration data collection for that array.
When a configuration data collection is in progress for an array, do not initiate another configuration data collection for the same array.
The configuration changes made to an array after a one-time collection are not automatically updated in PA. You must perform the one-time configuration data collection again to receive the latest configuration data for that array. However, if you have scheduled the configuration data collection for an array, PA automatically retrieves the latest configuration data as per the selected schedule frequency.
Ensure that a collection you have initiated completes before issuing another collection for the same array.
If an array is connected to a host agent that is running on the HP-UX 11i v3 operating system, the Device Special File (DSF) is displayed in a new format. A legacy DSF is displayed in parentheses next to the new format. On non-Windows systems, the DSF is an interface for a device driver that appears in a file system as if it were an ordinary file. On Windows systems, the DSF allows the software to interact with a device driver using the standard input or output system calls.
With every configuration data collection for an array, PA gets the latest internal raw disk capacity or the latest physical usable capacity of that array. These values are updated under Array Capacity (TB) in the License screen. So, it is necessary that the SVP for the array is online and accessible, not locked by any other user, and not under maintenance.

If a configuration data collection is in progress for an array, PA stops collecting configuration data in the following cases:

If the physical usable capacity of a P9500 or XP7 disk array or the internal raw disk capacity of an XP disk array cannot be updated. In such a case, PA displays the following error message on the Collections screen:

Array capacity data could not be fetched through SVP for the XP disk array <serial_num> or XP7 disk array <serial_num>. Ensure that the XP disk array or the XP7 disk array is online, the SVP is accessible and not under maintenance, or locked by another resource. The configuration data collection will not be allowed for this XP disk array until the problem is resolved.

Simultaneously, an event is also logged on the Event Log screen. If you have scheduled a configuration data collection and it fails, a failure notification message is also sent to the recipient address specified in the Data Collection Settings section on the Settings

pane in PA Settings. You can also configure the Data Collection Settings by clicking SMTP/SNMP Setting on the Collections screen.

If the license has expired or the licensed capacity has exceeded the grace period for an XP or an XP7 disk array. In such a case, PA displays the following error message on the Collections screen:

Configuration collection is stopped due to license violation for array <serial_number>

Simultaneously, an event is also logged on the Event Log screen.

Manage configuration collection

About configuration data collections

After the host agents are discovered, you can start the configuration data collection for the arrays using the corresponding command devices. You can collect the configuration data for the arrays in the following ways:

One-time configuration data collection: Use this collection type if you want to collect the configuration data only once. Any new configuration changes to the XP, P9500, and XP7 disk arrays, such as new components that are added after the collection completes, are not captured in the existing configuration collection.
Recurring configuration data collection: Use this collection type if you want to schedule the configuration data collection periodically on an hourly, daily, weekly, or monthly basis. Based on the schedule frequency, PA collects the updated configuration data from the XP, P9500, and XP7 disk arrays.

If you want PA to automatically collect the performance data for new components, you must enable the corresponding performance data collection schedules to automatically accept and monitor the performance of new components.

After the host agents appear under the Host Information pane, the details from the command devices are also displayed in the Config and Performance Collection table on the Collections screen.

Configuration collection schedule screen details

The following describes the configuration collection schedules in the Create Config page.

Hourly: If the collection schedule is selected as Hourly, the Hourly Schedule list appears. Select the schedule frequency as 1 hour, 6 hours, or 12 hours. The Start Time list is not enabled for the Hourly collection schedule. Example: If you create an hourly schedule at 12:30 PM and set the schedule frequency as 1 hour, PA executes the schedule only at 1:00 PM and collects data for the next one hour. A new instance of the schedule executes at 2:00 PM, and this process repeats every hour.

Daily: If the collection schedule is selected as Daily, provide the schedule start time. Every time the schedule is executed, PA collects the configuration data for the last 24 hours only.

Weekly: If the collection schedule is selected as Weekly, the Day of the Week list appears. Select the day and provide the schedule start time. Every time the schedule is executed, PA collects the configuration data for the last one week only.

Monthly: If the collection schedule is selected as Monthly, the Monthly Schedule appears with options for scheduling the collection on a particular date (Based on Date) or day (Based on Day) of a month. Every time the schedule is executed, PA collects the configuration data for the last one month only.

If you want to schedule the collection on a particular date:

Select the Monthly Schedule as Based on Date, if it is not selected by default.
Provide the schedule start time by selecting from the Start Time list.
From the Date of the Month list, select the date when you want the schedule to execute every month. By default, 1st is selected as the date of the month.

If you want to schedule the collection on a particular day:

Select the Monthly Schedule as Based on Day, if it is not selected by default.
From the Day of the Week list, select the day when you want the schedule to execute every month. By default, Sunday is selected as the day of the week.
Select the week to which the day belongs from the Week of the Month list. This is a mandatory selection. By default, the 1st week is selected as the week of the month.
Provide the schedule start time by selecting from the Start Time list.

Example: If you select Sunday as the day of the week, 2 as the week of the month, and the start time as 8:00 PM, PA executes the schedule on the 2nd Sunday of every month at 8:00 PM.

Start one-time configuration data collection

Prerequisites

These prerequisites are common for both the one-time and scheduled configuration data collection:

Start the configuration data collection only when a command device is created on an array. For more information on creating command devices, see the HPE XP7 Performance Advisor Software Installation Guide.

Procedure

1. From the main menu, click Collections.
2. Go to the Config and Performance Collection table. The table displays the list of command device records for all the arrays that are monitored by PA.
3. Select the command device record corresponding to the array for which you want to collect the configuration data. Alternatively, click an XP or an XP7 disk array icon displayed above the Configuration Collection tab to view the corresponding set of records highlighted in the Configuration Collection table. The existing set of records is automatically sorted to display the command devices that belong to the selected XP or XP7 disk array at the beginning of the Configuration Collection table.
4. Click Create/Edit Config.
5. Retain the Collection Period as Collect Now (the default selection).
6. The next steps depend on the disk array that you selected:

For an XP disk array: Manually enter the SVP IP address in the SVP IP Address text box, and proceed to the next step to initiate the configuration data collection. If you have already registered the SVP of the selected XP disk array with the respective management station, the corresponding SVP IP address is displayed in the SVP IP Address text box.

For an XP7 disk array: The corresponding SVP IP address, RWC user name, and password are stored in the respective text boxes, if you have already saved these credentials in PA for the selected XP7 disk array. If you have privileges to read the disk array configuration (minimum required - Storage Admin role with View privilege), select the Authentication Enabled check box, and then proceed to the next step to initiate the configuration collection. This authentication is required to collect the configuration data of the disk array. Before enabling authentication, ensure that you first save the credentials using the Register/Save SVP Credentials link on the Collections screen. Alternatively, navigate to PA Settings > Register/Save SVP Credentials to save the data. If you do not want to enable the authentication, proceed to the next step to initiate the configuration collection. If the SVP IP address was not saved earlier, you can manually enter the SVP IP address. If authentication is required and you do not enable it, the configuration data collection fails.

7. Click Submit. PA starts collecting the configuration data for the array through the selected command device. The collection status in the Config Collection table displays a white circle, indicating that the collection is in progress. Place the pointer over Status to see the status Collection in Progress. After the configuration data is collected, the collection status icon displays a green circle, indicating that the collection was successful. The Timestamp column displays the updated time stamp when PA completes receiving the latest configuration data.

Click Refresh to restore the default settings.

Create/Edit recurring configuration data collection

Prerequisites

The schedule start time is set to the management station time where PA is installed.

Procedure

1. From the main menu, click Collections. The Config and Performance Collection table displays the list of command device records for all the arrays that are monitored by PA.
2. Select the command device record corresponding to the array for which you want to collect the configuration data.
3. PA checks for the credentials and the registered status of the array.
4. If the array is not registered, click Ok on the Confirmation Message dialog box.
5. Click Create/Edit Config.
6. Enter a valid SVP User Name and SVP Password on the Edit Register/Save SVP Credentials dialog box for the array, and click Save & Register. After the array is registered, a success message is displayed.
7. On the Create/Edit Config dialog box, set the Collection Period to both Collect Now (the default selection) and Schedule. If you want to only schedule a configuration collection, clear the Collect Now check box.

NOTE: Select at least one option to enable the Create button.

8. Select one of the following as the Collection Schedule. By default, the collection is scheduled for every Sunday at 00:00 hours:

Hourly
Daily
Weekly
Monthly

9. Specify the following:

The hours for the Hourly option
The start time for the Daily option
The day and the schedule start time for the Weekly option
For the Monthly option, select either Based on Date or Based on Day. Select the Date of the Month and the schedule Start Time for the Based on Date option.

Specify the Day of the Week, the week to which the day belongs in Week of the Month, and the Start Time for the Based on Day option.

10. Click Create.

If you want to edit a configuration collection schedule that you have created, select the command device corresponding to the array for which you want to edit the schedule. Click Create/Edit Config, and in the Create/Edit Config dialog box, modify the recurring schedule, and click Create.

PA starts collecting the configuration data for the array through the selected command device. The Collection Status shows an in-progress icon indicating that the collection is in progress. After the configuration data is collected, the Collection Status icon displays a green circle indicating that the collection was successful. The Timestamp column displays the updated time stamp when PA completes receiving the latest configuration data. The latest configuration data is automatically updated in PA. If there are new components that you want to monitor, enable the associated performance data collection schedules to automatically collect data for the new components (RAID Groups and ports).

NOTE: If there are multiple command devices for an array, you can create or edit the configuration collection schedule on any of the command devices. Configuration schedules created previously on any other command device are deleted.

Delete configuration data collection schedules

Procedure

1. From the main menu, click Collections.
2. In the Config and Performance Collection table, select a command device record corresponding to the XP/XP7 disk array for which you want to delete the configuration data schedule.
3. Click Delete Config.

When a configuration data collection schedule is deleted, PA stops the collection from the subsequent scheduled collection cycle and then deletes the schedule. However, the current collection stops only after the latest configuration data is collected or when the scheduler time resets to 60 minutes. For example, if you scheduled a configuration data collection for two hours at 10:00 AM and stopped the schedule at 10:30 AM, PA still continues with the configuration collection that was initiated at 10:00 AM. However, after the current collection completes or the scheduler time resets to 60 minutes, PA does not initiate a new configuration data collection.

Manage performance collection

About performance data collections

After you complete collecting the configuration details for an array, check whether the Config Collection Status for that array is complete. The status changes to a green icon upon completion, and displays as Collection Success or Failed when you hover over the status icon. You can also check the status of the action from the Event Log. After the configuration collection is complete, you must select the array record again and schedule the performance collection for the associated components, which belong to the following component types:

DKC
Ports

RAID Groups
Ext RAID Groups
THP pools
Snapshot pools
Cont. Access Journals

You can create two performance data collection schedules for an array, as this enables you to monitor the respective components more frequently. The components that are not selected as part of the first schedule are automatically added to the second schedule. You can set different collection frequencies to collect data for the DKC, the ports, and the RAID Groups. You can also enable the schedules for automatic updates. The collection frequency set for a RAID Group in a schedule is applicable for any component of the continuous access journal, snapshot, ThP, and external RAID Group types. It is applicable only if they are selected for performance data collection.

The following are important notes on the performance data collection:

The performance data collection does not start when the configuration data collection is in progress. It starts automatically when the configuration data collection completes. If you set the frequency of the configuration data collection schedule as one hour, then after every hour, the performance data collection stops and restarts only when the configuration data collection completes. On successful completion, the Event Log screen displays records for the generated events. In case of a performance data collection failure, the Event Log screen displays the failure messages.

NOTE: If you plot performance graphs when the configuration data collection is in progress, there are gaps in the data points. These gaps occur only at the times when the configuration data collection is in progress, not throughout. They indicate that the performance data is not collected while the configuration data collection is in progress.

While creating a performance data collection schedule, you can select a command device that is mapped through two different ports.
While creating performance data collection schedules, you cannot split the components available in the respective component type lists into two schedules. For example, if you create two performance data collection schedules, the components that you select from the DKC, Port, and RG component type lists for the first schedule cannot be included in the second schedule.
In a multipathing environment, ensure that a command device is not exposed to a host from two different ports. Doing so stops the current performance data collection, as the schedule configured to obtain the collection is automatically deleted.
HPE recommends that you set the data collection rate to one hour or less because of management station performance and field rollover.
PA collects performance data on all LDEVs in the arrays that communicate with the management station through their respective hosts. The hosts that display their status as Received under the Host Information tab constitute the superset of the mapped LDEVs. The performance data collection is not limited to the number of LDEVs that the host agent is mapped to use.
If you set the collection interval too narrow (less than 5 minutes), it results in reduced responsiveness from the management station.
If you are collecting the performance data for the first time, it takes longer than usual for PA to collect the data. The subsequent performance data collections occur as per the time specified in the Frequency section.
If you have performed a one-time configuration data collection and the XP/XP7 disk array configuration is later modified, HPE recommends that you perform a fresh configuration data collection, so that the performance data collected is for the latest configuration.
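The notes above about collection frequency and gaps in performance graphs can be pictured with a small scheduling sketch. The following Python fragment is a minimal illustration only, not PA's implementation: the helper callables (config_in_progress, collect_sample) and the cycles parameter are hypothetical, while the 1 to 60 minute bounds and the 5-minute caution come from this section and the next.

import time

def run_performance_schedule(frequency_min, config_in_progress, collect_sample,
                             cycles=3):
    # Enforce the documented frequency bounds (1 to 60 minutes).
    if not 1 <= frequency_min <= 60:
        raise ValueError("frequency must be between 1 and 60 minutes")
    if frequency_min < 5:
        print("Warning: rates below 5 minutes reduce management "
              "station responsiveness")
    for _ in range(cycles):
        if config_in_progress():
            # Performance data is not collected while configuration data
            # collection runs; this is the gap seen in performance graphs.
            print("configuration collection in progress; sample skipped")
        else:
            collect_sample()
        time.sleep(frequency_min * 60)  # wait one collection interval

For example, run_performance_schedule(15, lambda: False, lambda: print("sample collected")) would take a sample every 15 minutes; in PA the waiting is done by the scheduler rather than a sleep loop.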

Create performance data collection schedules
IMPORTANT:
Only one schedule can be created on a selected command device. For better performance, select a maximum of two command devices that belong to different ports. A schedule cannot be created for the same array through two different host agents.
HPE recommends that you allow two minutes per 1,000 LDEVs for the management station to keep up with the collection. PA collects the performance data on all the LDEVs.
You can set the collection interval for the performance data collection to a minimum of one minute or a maximum of 60 minutes. If you select all the seven component types, you must set a minimum frequency of 15 minutes and a maximum of 60 minutes.
For viewing and collecting the performance data for a large number of LDEVs, you must configure the Java heap size settings on both the management station and the host system.
The command device for which PA is configured to collect the performance data must have a Logical Unit Number (LUN) path configured from all the SLPRs. A LUN results from mapping a SCSI logical unit number, port ID, and LDEV ID to a RAID group. The size of the LUN is determined by the emulation mode of the LDEV and the number of LDEVs associated with the LUN. For example, a LUN associated with two OPEN-3 LDEVs has a size of 4,693 MB.
If continuous access journal, snapshot, or ThP components are configured in an array, you must select all of them from the respective component type lists.
HPE recommends that you clear the cache on your IE browser before creating a schedule. Ensure that the Every time I visit check box is selected.
Procedure
1. From the main menu, click Collections.
2. In the Config and Performance Collection table, select the array for which you want to schedule performance data collection.
3. Click Create/Edit Perf.
The Create/Edit Perf button is enabled only when you select an array record from the table. Depending on whether it is the first or the second performance data collection schedule that you are creating, PA validates the following and displays appropriate error messages if the validations fail:
If it is the first performance data collection schedule, PA verifies whether the configuration data is already available for that array.
If it is the second performance data collection schedule, in addition to the configuration data, PA also verifies:
Whether the second schedule is using a different port than the one used by the first schedule.
Whether the same host is used to communicate with the selected array.
4. Enter a schedule name in the Schedule Name box.

The performance schedule name can be a maximum of 32 alphanumeric characters. You can insert a space and also use special characters, such as underscore (_) and apostrophe (').
5. For a P9500/XP7 disk array, you must perform an additional step. If the command device being used for collection has authentication enabled, select the Authentication Enabled check box. Enter the RWC user name and password that has the privilege to use the command device, and proceed to the next step to select the components.
If you select the same command device that was used for configuration data collection and authentication was enabled for the configuration collection, the RWC credentials are automatically populated in the fields adjacent to the Authentication Enabled check box.
If you do not want to enable the authentication, proceed to the next step to select the components.
NOTE: If authentication is required and you do not enable it, the performance data collection fails.
6. In the respective component type lists, select the required check boxes for the components to collect their performance data. The following component type lists are displayed:
DKC: For an XP disk array, the DKC provides data on the CHIPs,[1] ACPs,[2] Cache,[3] Storage Logical Partition (SLPR),[4] Cache Logical Partition (CLPR),[5] and the Shared Memory (SM).[6] For a P9500/XP7 disk array, the DKC provides data on the MP blades,[7] in addition to the data on the Cache, CLPR, and the SM.
Port(s): Displays the frontend ports.
RG(s): Displays the array volumes, including business copy and continuous access (synchronous/asynchronous) volumes.
SNAP Group(s): Displays the snapshot volumes.[8]
THP Group(s): Displays the ThP volumes.[9]
CA Journal Group(s): Displays the continuous access journal volumes.[10]
External RG(s): Displays the external RAID Groups connected to the selected array.
7. You can set the frequency in minutes for each of the following components: DKC, RAID groups, and Port(s) in the respective Frequency list.
[1] Channel host interface processor.

[2] Array Control Processor (ACP) is used in the XP disk arrays prior to the XP24000 Disk Array. With the introduction of the XP24000 Disk Array, the DKA has replaced the ACP. The DKA is also applicable for the P9500/XP7 disk arrays. The ACP handles the transfer of data between the cache and the physical drives held in the Disk Cabinet Units (DKUs). The ACPs work in pairs, providing a total of eight SCSI buses. Each SCSI bus associated with one ACP is paired with a SCSI bus on the other ACP pair element. In the event of an ACP failure, the redundant ACP takes control. Both ACPs work together by sharing the load. On XP models such as the XP10000 Disk Array, this function is handled by the DKA on the MIX board. MIX is a circuit board in the disk control unit that includes disk adapters and channel adapters for interfacing disk drives and the host to cache memory.
[3] A cache is a high-speed memory that is used to speed up the I/O transaction time. All reads and writes to the XP and P9500 disk arrays are sent to the cache. The data is buffered in the cache until the transfer to or from the physical disks (with slower data throughput) is complete. The benefit of cache memory is that it speeds up the I/O throughput to the application. The larger the cache size, the greater the amount of data buffering that can occur and the greater the throughput to the applications. In the event of power loss, the battery power maintains the contents of the cache for a specified time period.
[4] The SLPR is a partition of the RAID500 to which the host ports (one or more) and the CLPRs (one or more) are assigned. The SLPR0 always exists and cannot be deleted. Sometimes, the SLPR acronym is expanded differently; for example, Storage administrator Logical Partition and Storage management Logical Partition both mean the same. The purpose of the SLPR is to allow multiple administrators to manage a subsystem without the risk of mistakes that can destroy another user's volumes, or of reducing another user's expected performance by using more components (for example, cache) than required.
[5] The cache logical partition contains cache and parity groups. It is available on the XP12000, XP10000, and later generations of the XP/XP7 disk arrays.
[6] Memory that exists logically in the cache. It stores common information about the storage system and the cache management information (directory). The storage system uses this information to control exclusions and differential table information. Shared memory is managed in two segments and is used when copy pairs are created. In the event of a power failure, the shared memory is kept alive by the cache memory batteries while the data is copied to the cache flash memory (SSDs).
[7] The MP blades are the microprocessor blades in the P9500/XP7 disk arrays. Each MP blade has four MPs residing on it. The MPs/DKPs that reside on the CHAs and the DKAs in the XP disk arrays form part of the MP blades in the P9500/XP7 disk arrays.
[8] The snapshot is a business copy volume type that depicts a point-in-time copy of the original primary volume.
[9] Thin Provisioning is a volume management feature that is implemented by creating one or more Thin Provisioning pools (THP pools) of physical storage space using multiple LDEVs. Then, you can establish virtual THP volumes (THP V-VOLs) and connect them to the individual THP pools. In this way, capacity to support data can be randomly assigned on demand.
[10] The Continuous Access Journal Software is an asynchronous mirroring program similar to Continuous Access Asynchronous, except that the transactions to be written to the secondary disk array are maintained in a disk-based journal file. This provides better performance for secondary disk array systems that are not highly available or that may be subject to bandwidth contention from other applications.
NOTE: The THP Group(s), CA Journal Group(s), and the SNAP Group(s) component type lists are displayed only if the corresponding components are configured on the selected XP/XP7 disk array. The External RG(s) component type list is displayed only if external volumes (a logical volume whose data resides on drives that are physically located outside the HPE storage system) are attached to the selected XP/XP7 disk array.
Selecting a ThP, snapshot, or a continuous access journal volume also provides the respective volume pool information.
To select all components, select the check box next to the component type.
8. To stagger the data collection time at different intervals, scroll down and select the Stagger Collection check box.

For example, if the Stagger Collection check box is not selected and Frequency is set to 15 minutes, performance data collection occurs every fifteen minutes aligned to the quarter hour, irrespective of when the schedule is created. The first collection occurs immediately when the schedule is created, the next collection occurs at the next quarter of the hour, and the subsequent collections occur at each quarter hour after that.
If you select Stagger Collection, performance data collection occurs every 15 minutes from the time the schedule is created. The first collection occurs immediately, the next collection occurs 15 minutes later, and the subsequent collections take place every 15 minutes after that.
The Stagger Collection option ensures that the load on the management server is balanced, because the data collection occurs for all the XP and the XP7 disk arrays at varied points of time in the day and not at the same quarter-hour marks.
9. Select the Add new RGs and Ports to the schedules that have RG and Port components enabled check box if you want to add the new RAID Group or port components that are discovered during the scheduled configuration data collection to the appropriate schedule.
The newly discovered components are added to the appropriate schedule without impacting the other components included in a performance data collection schedule.
10. Click Create for the changes to take effect.
The new schedule starts automatically. The following table describes the subsequent changes that occur in the Performance Data section for the selected XP/XP7 disk array record.
Schedule Name: Displays the new schedule name.
Status: Displays the status icon as success, in progress, or failed. Hover over the icon to know the status.
Performance Collection Frequency: Displays the selected frequency for the DKC, RAID groups, and the port data collection. Initially displays 0,0,0 when a schedule is not yet configured.
Performance Collection TimeStamp: Displays the time when performance collection is completed for the DKC, RG, and Port.
When the performance data collection is in progress, you can perform the following functions. Allow PA to complete at least two performance data collection cycles before you proceed with these tasks, to ensure that sufficient data is available to be projected on the respective screens:
View a graphical representation of the performance of components for the different metrics and duration that you specify.
View the performance summary of components in the Array Summary screen in Summary View.
Configure alerts on components, so that PA can send appropriate alert notifications.
Create custom groups for a set of LDEVs that you want to frequently monitor.
Create and schedule reports to view the performance data of components for the different metrics and duration that you specify.
In case of performance data collection failures, the appropriate failure messages are displayed on the Event Log screen. PA can also dispatch notification about the collection failure to the intended

recipient. To receive performance data collection failure notification, you must configure the appropriate notification settings.
Enable performance collection schedules for automatic updates
You can enable the performance data collection schedules to automatically collect the performance data for newly discovered RAID Groups and ports. The new RAID Groups and ports in an array are discovered during the scheduled configuration data collection. They are automatically added to the existing list of RAID Groups and ports in the corresponding performance data collection schedule that is enabled to collect performance data for the new components. The newly discovered components are added only to those performance data collection schedules that are enabled for automatic updates. PA collects data for the new RAID Groups and ports from the subsequent data collection cycle.
The new components can also comprise virtual volumes, such as ThP, snapshot, continuous access journals, and the external RAID Groups. The virtual volumes cannot be split across two schedules. They are automatically added only to those performance data collection schedules that are already collecting the performance data for these virtual volumes, irrespective of whether the performance data collection schedule is enabled to receive automatic updates.
For a schedule to automatically collect data for the new set of RAID Groups and ports, select the Add new RGs and Ports to the schedules that have RG and Port components enabled check box in the Create/Edit Perf page while creating a performance data collection schedule.
The following scenarios describe when a schedule can be enabled for automatic updates.
Scenario: One schedule created (Schedule 1)
Automatic updates: Enabled Schedule 1 for automatic updates.
What happens: The newly discovered RAID Groups and ports are automatically appended to the existing list for which performance data collection is scheduled. The performance data collection continues for the new components also.
Scenario: One schedule created (Schedule 1)
Automatic updates: Disabled Schedule 1 for automatic updates.
What happens: The newly discovered RAID Groups and ports are not appended to the existing list of RAID Groups and ports. You have to modify the performance data collection schedule later to add or remove components from Schedule 1.

Scenario: Two schedules created (Schedule 1 and Schedule 2)
Automatic updates: Enabled Schedule 1 for automatic updates.
What happens: The newly discovered RAID Groups and ports are automatically appended to the existing list for which Schedule 1 is in progress. The performance data collection continues for the new components also. The newly discovered RAID Group and port components that are not selected in any performance data collection schedule are added to the appropriate schedule. Schedule 2 is automatically disabled for automatic updates, as Schedule 1 is already enabled to receive automatic updates. Hence, you must edit the schedule manually to add or remove components from Schedule 2. However, if Schedule 1 is not enabled for automatic updates, you can still enable Schedule 2 to receive automatic updates.
Scenario: Two schedules created (Schedule 1 and Schedule 2)
Automatic updates: Both Schedule 1 and Schedule 2 are disabled to receive automatic updates.
What happens: The newly discovered RAID Groups and ports are added to neither Schedule 1 nor Schedule 2. Hence, you must edit the schedules manually to add or remove components in the existing list.
Start performance data collection in case of a disk failure
The performance data collection might stop on all the XP and the XP7 disk arrays connected to a host, if the command device used belongs to an array group where a disk failure occurred. To restart performance data collection:
Procedure
1. Under the Host Information pane, select the host (on which performance data collection has stopped) and click Request Host Info. The status for the selected host changes to Received after the information is collected from the host.
2. Under the Config and Performance Collection table, delete and recreate the performance schedules with a new name for the selected host. The schedule is enabled automatically and the performance data collection begins. The previous collection data is still retained.
View performance data collection schedules
After creating a schedule, click View Perf. The list of selected components, the respective data collection frequencies, and the command device chosen for data collection are displayed. In addition, the port type, such as FCoE (applicable only for P9500/XP7 disk arrays), is also displayed for a port.
NOTE: The View Perf button is enabled only when you select an array record in the Config and Performance Collection table.
Edit performance data collection schedules
You can add or remove components from an existing performance data collection schedule, and edit the frequency of data collection. When you edit a performance data collection schedule, you might notice

missing data points for components in the subsequent collection cycle before PA starts collecting data for the new set of components and frequency.
Procedure
1. From the main menu, click Collections.
2. In the Config and Performance Collection table, select the array record for which you want to modify the associated performance data collection schedule.
3. Click Create/Edit Perf.
The schedule details appear and the selected components are displayed in the respective component type boxes.
4. Modify the schedule settings as required.
5. To commit the changes, click Create.
The updated frequency is displayed under Frequency. In the subsequent data collection cycle, HPE XP7 Performance Advisor collects data for the new set of components as per the new frequency.
Stop performance data collection schedules
In the Config and Performance Collection table, select the XP/XP7 disk array record, and click Stop Perf. PA stops the collection from the next collection cycle. The current performance data collection stops only after the current data collection is complete, as per the selected collection schedule. For example, if you configured an hourly collection at 11:00 a.m. and stopped the schedule at 11:30 a.m., the current performance data collection still continues as per the selected collection schedule and ends only at 12:00 p.m. Further data collections are not performed until you restart the schedule. The status icon indicates that the performance collection schedule has stopped for the selected array record.
NOTE: To restart a performance data collection schedule, click Start. PA resumes the data collection at the same set frequency on the selected array components, and the status icon appears in green.
Delete performance data collection schedules
Select the XP/XP7 disk array record, and click Delete Perf. The performance data collection schedule is permanently deleted.
About communicating with host agents
You must first install and configure the host agent on the host. To install and configure host agents, refer to the HPE XP7 Performance Advisor Software Installation Guide. PA then displays the host agents in the Host Information table under the Host Information pane. Hover over the status icon to know the current status of a host agent. The status icon changes to green (when you hover over it, it displays as Received) after PA retrieves the requested information from the host agent. When the host agents are first installed on the hosts, the current status is displayed as Received.
NOTE: To view updated RMLIB host agent versions and new array types, select the upgraded host entry and request a host update.

Request host agent updates
Prerequisites
Ensure that the version of the host agent installed on the host matches the version of PA installed on the management station.
Ensure that the command devices are already created on the arrays connected to your host, and configured to communicate with the host. If they are not already created, the corresponding RAID Manager Library version is not displayed when you request a host update. In such cases, do the following:
1. Create command devices on the arrays. For more information on creating command devices, refer to the HPE XP7 Performance Advisor Software Installation Guide.
2. Associate the command devices with your host agents.
3. Request an update on your host agents.
Requesting host agent update
1. From the main menu, click Collections.
2. To update the host information, in the Host Information pane, click Request Host Info. The Request Host Info button is enabled only when you select the host agents. Use the Shift or the Ctrl key to select multiple host agent records.
The request is executed in the subsequent data collection cycle. The following is the sequence of events that occurs for the selected host agent:
a. PA retrieves the updated information from the host agent. This may take a few minutes depending on the number of LDEVs that are exposed to the host agent.
b. After PA has retrieved the latest information from the host agent, the Status changes to Received.
c. The latest timestamp is displayed under Updated. Click the refresh icon to manually refresh the Host Information table.
If there are any configuration changes in the associated XP and XP7 disk arrays, PA also updates these details on the relevant screens. For example, if the updated information is about a new command device, an additional record for the new command device is also displayed under the Configure Information pane in Summary View > Array Summary.
To collect configuration data from a reconfigured XP or XP7 disk array, perform the above-mentioned steps. During the next data collection cycle, the host collects the configuration data again from the reconfigured XP or XP7 disk array and displays it in the Array Summary screen. This process avoids inconsistencies in the performance data collected for the reconfigured XP or XP7 disk arrays.
Remove host agent information
If an array is connected to two host agents, both of which communicate with the same management station, and one of the host agent records is removed, the configuration and the performance data for that array is still available in PA. This is because that array is still connected to the other host agent. You can purge the existing configuration and performance data for those arrays that are no longer monitored by PA.

Prerequisites
Ensure that the Status of the host agent is green. You cannot remove a host agent record when its status is Requested.
Procedure
1. From the main menu, navigate to Collections > Host Information.
2. Select the host agent records that you want to remove from PA.
3. Click Remove Host. The Remove Host button is enabled only when you select a host agent record.
PA deletes the host agent record and logs a confirmation on the Event Log screen. When you remove a host agent, information about the command devices and all the information pertaining to the arrays connected to that host agent are also removed. Initiate a new configuration collection on the arrays, and recreate or reconfigure the following:
Configuration data
Performance schedules
Report schedules
Alert configuration data
Custom groups
If you want to view the host agent records again, restart the host agent services on that host. A record for the host agent is automatically displayed under the Host Information pane. For more information on restarting host agent services, refer to the HPE XP7 Performance Advisor Software Installation Guide.

Monitor disk arrays
PA provides a dashboard where you can view the overall usage status of the XP and XP7 disk arrays. The overall usage status is based on the usage of the individual components.
About dashboards
PA enables you to identify critical arrays and their components from the dashboard-level screens, and thereby drill down to the critical components. The Overview dashboard flags the arrays that are critical and also depicts the metrics of critical components in the form of donuts and bar charts. The donuts act as links connecting the Overview dashboard to the other levels of dashboards and to the component screens.
If at any time a metric or a component has crossed the threshold limit in an array, or is at 95% or higher of the set threshold limit, within a specified duration, then the status for the given array changes to Critical or Warning. For example, if the IOPS metric for a threshold duration of 6 hours, from 6 A.M. to 12 P.M., crosses the threshold value at 10 A.M. and recovers back to the normal state at 11 A.M., PA still flags the metric as Critical or Warning in the dashboard, because there was a spike in the data at a point in the specified time range.
The number of arrays depicted above the donut charts represents the same data as in the donut charts. Clicking either the donut or the numbers above it with status icons navigates you to the respective levels of screens. By default, the donut charts display only the critical arrays inside the circle. To view other statuses, hover over the text inside the donut chart, and the selection changes accordingly.
The Component dashboard displays the top X consumers graphs by each component category for the selected array. Click the graph title to display only the top X components in the respective component screen. Clicking any of the individual bars in the graph redirects you to the respective component screen, displaying the performance and utilization graphs for the selected component.
Overview dashboard
The Overview dashboard displays the details at the overall storage level in the form of widgets. The Overview dashboard widgets are Performance, Component, Continuous Access, and Capacity.
During the threshold duration, if a metric or a component crosses the threshold limit in an array, then the status of that array changes to the Critical or Warning state. For example, if the IOs metric for a threshold duration of 6 hours (from 6 A.M. to 12 P.M.) crosses the threshold at 10 A.M. and recovers to the normal state at 11 A.M., PA still flags the metric as being in either the Critical or the Warning state.
NOTE: PA has set default threshold values for metrics such as the response time. You can modify the values according to your environment.
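The flagging rule above can be expressed as a short sketch. The following Python fragment is a minimal illustration, not PA's code; it assumes one plausible mapping (a threshold crossing yields Critical, reaching 95% of the threshold yields Warning), which the text above does not fully pin down, and the sample values are invented.

def array_status(samples, threshold):
    # Any breach within the threshold duration marks the array,
    # even if the metric later recovers to normal.
    if any(value > threshold for value in samples):
        return "Critical"              # crossed the threshold limit at some point
    if any(value >= 0.95 * threshold for value in samples):
        return "Warning"               # reached 95% or more of the threshold
    return "Ok"

# Example: IOPS breaches at 10 A.M. but recovers by 11 A.M.; the array is
# still flagged for the whole 6-hour window.
print(array_status([4000, 5200, 4100, 3900], threshold=5000))  # -> Critical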

The Overview dashboard displays the following:
Donut and tabular data: Displays the statistics of the array, such as the total number of IOs and the total data transfer in MBs for all the arrays managed by PA. When you hover the mouse over a donut, the corresponding information is displayed inside the circle for an array. For further analysis at the individual component level, click a donut to go to the respective screen.
Bar graphs: Displays the maximum and average information of the respective metrics for all the arrays managed by PA.
Status icon: Displays the arrays in the Ok, Critical, Warning, or Unknown state. The health status in each of the widgets is determined based on the threshold duration and values set for component metrics in the Threshold Settings screen. The default threshold duration for monitoring array performance is 6 hours.
Number: Displays the total number of arrays in a particular state. For further analysis at the individual component level, click a status icon or a number to go to the corresponding dashboard screen. The respective dashboard screen sorts and lists the arrays based on your selection of the status icon or the number. If you click the Critical icon or the number representing the arrays in the Critical state, the corresponding dashboard displays the critical arrays on the top. It also displays the consolidated data of the first array in the list.
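As a minimal illustration of where the numbers above the donuts come from, the following sketch tallies hypothetical arrays by state; the array names and states are invented for the example, and PA derives the real states from the threshold checks described earlier.

from collections import Counter

# Hypothetical per-array states, as produced by the threshold evaluation.
arrays = {"XP7-001": "Ok", "XP7-002": "Critical", "P9500-01": "Warning",
          "XP24K-01": "Ok", "XP7-003": "Unknown"}

# Count arrays per state; these counts are shown above the donut, and the
# donut itself shows the Critical count inside the circle by default.
counts = Counter(arrays.values())
for state in ("Ok", "Critical", "Warning", "Unknown"):
    print(f"{state}: {counts.get(state, 0)}")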

Overview dashboard screen details
Table 9: Performance
LDEV Avg Frontend IO
Description: Displays the number of arrays in the Ok, Critical, Warning, or Unknown state. This state is based on the threshold value of Ldev Frontend IOs that is set in the Threshold Setting screen. The donut and the tabular data display the average IOs received by each array.
Related tasks: To view the detailed usage statistics of associated components in an array, perform one of the following: Click the donut to go to the Array screen. The Array screen displays the information of the selected array. Click a status icon or a number to go to the Performance dashboard. The Performance dashboard screen displays the top 10 consumers of the selected array.
LDEV Avg Frontend Throughput
Description: Displays the number of arrays in the Ok, Critical, Warning, or Unknown state. This state is based on the threshold value of the data transfer rate in Ldev Frontend MBs that is set in the Threshold Setting screen. The donut and the tabular data display the average of data that is received and written or read by each array.
Related tasks: To view the detailed usage statistics of associated components in an array, perform one of the following: Click the donut to go to the Array screen. The Array screen displays the information of the selected array. Click a status icon or a number to go to the Performance dashboard. The Performance dashboard screen displays the top 10 consumers of the selected array.
LDEV Response Time
Description: Displays the number of arrays in the Ok, Critical, Warning, or Unknown state. This state is based on the threshold value of the average read and write response time of LDEVs that is set in the Threshold Setting screen. The graph displays the average and maximum response time for the arrays managed by PA. The maximum response time is the maximum value among all maximum read and write response times of all the LDEVs in the array during the set duration. The average response time is the average of the average read and average write response times of the LDEVs in the array. Average read response time is the average time taken, in milliseconds, for read IOs from the time an IO is received on the array host port to the time it is processed inside the array and the data is returned to the array host port. Average write response time is the average time taken, in milliseconds, for write IOs from the time an IO is received on the array host port to the time it is processed inside the array and the acknowledgment is returned to the array host port.
Related tasks: To view the detailed usage statistics of associated components in an array, perform one of the following: Click the chart to go to the Array screen. The Array screen displays the information of the selected array. Click a status icon or a number to go to the Performance dashboard. The Performance dashboard screen displays the top 10 consumers of the selected array.

Table 10: Component
Ports Avg IOPs
Description: Displays the number of arrays in the Ok, Critical, Warning, or Unknown state. This status is based on the threshold value of Frontend IOs that is set in the Threshold Setting screen. The donut and the tabular data display the average IOs received by each array.
Related tasks: To view the detailed usage statistics of associated components in an array, perform one of the following: Click the donut to go to the Ports screen. The Ports screen displays the information of the selected array. Click a status icon or a number to go to the Component dashboard. The Component dashboard screen displays the top 10 consumers of the selected array.
Port Avg Throughput
Description: Displays the number of arrays in the Ok, Critical, Warning, or Unknown state. This state is based on the threshold value of the Frontend MBPS that is set in the Threshold Setting screen. The donut and the tabular data display the average data transferred for each array.
Related tasks: To view the detailed usage statistics of associated components in an array, perform one of the following: Click the donut to go to the Ports screen. The Ports screen displays the information of the selected array. Click a status icon or a number to go to the Component dashboard. The Component dashboard screen displays the top 10 consumers of the selected array.

XP7 Port Response Time
Description: Displays the graphical representation of the average and maximum response time of ports in the XP7 arrays managed by PA.
Related tasks: Click the donut to go to the Ports screen. The Ports screen displays the information of the selected array.
Cache Write Pending
Description: Displays the number of arrays in the Ok, Critical, Warning, or Unknown state. This state is based on the threshold value of the cache write pending utilization percentage that is set in the Threshold Setting screen. The graph displays the average and maximum cache write pending utilization percentage of each array managed by PA.
Related tasks: To view the detailed usage statistics of associated components in an array, perform one of the following: Click the chart to go to the Cache screen. The Cache screen displays the information of the selected array. Click a status icon or a number to go to the Component dashboard. The Component dashboard screen displays the top 10 consumers of the selected array.
Processor Utilization
Description: Displays the number of arrays in the Ok, Critical, Warning, or Unknown state. This state is based on the threshold value of MPB/CHA/DKA utilization that is set in the Threshold Setting screen. The graph displays the average and maximum processor utilization percentage of each array managed by PA.
Related tasks: To view the detailed usage statistics of associated components in an array, perform one of the following: Click the chart to go to the Processors screen. The Processors screen displays the information of the selected array. Click a status icon or a number to go to the Component dashboard. The Component dashboard screen displays the top 10 consumers of the selected array.
RG Utilization
Description: Displays the number of arrays in the Ok, Critical, Warning, or Unknown state. This state is based on the threshold values of the backend metrics and the RG utilization percentage that are set in the Threshold Setting screen. The graph displays the average and maximum RG utilization, which is the drive usage percentage, of each array managed by PA.
Related tasks: To view the detailed usage statistics of associated components in an array, perform one of the following: Click the chart to go to the Raid Group screen. The Raid Group screen displays the information of the selected array. Click a status icon or a number to go to the Component dashboard. The Component dashboard screen displays the top 10 consumers of the selected array.
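The array-level response-time aggregation defined under LDEV Response Time in Table 9 can be restated as a short worked example. The per-LDEV figures below are hypothetical; the sketch only demonstrates the max-of-maxima and average-of-averages rules stated above.

# Each tuple is a hypothetical per-LDEV stat for the set duration:
# (avg_read_ms, avg_write_ms, max_read_ms, max_write_ms).
ldev_stats = [
    (2.1, 3.4, 9.8, 12.0),
    (1.7, 2.9, 6.5, 30.2),
    (4.0, 5.1, 15.3, 18.7),
]

# Maximum response time: the largest of all per-LDEV read/write maxima.
array_max = max(max(mr, mw) for _, _, mr, mw in ldev_stats)

# Average response time: the mean of the per-LDEV read and write averages.
avgs = [v for ar, aw, _, _ in ldev_stats for v in (ar, aw)]
array_avg = sum(avgs) / len(avgs)

print(f"Array max response: {array_max} ms")    # 30.2 ms
print(f"Array avg response: {array_avg:.2f} ms")  # 3.20 ms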

Table 11: Continuous Access
Max Recovery Point Objective
Description: Displays the number of arrays in the Ok, Critical, Warning, or Unknown state. This state is based on the threshold value of the CA recovery point objective that is set in the Threshold Setting screen. The graph displays the maximum recovery point of each array managed by PA. The Recovery Point Objective (RPO) is the difference between the data write time on the primary and secondary volumes. It is represented in seconds.
Related tasks: To view the detailed usage statistics of associated components in an array, perform one of the following: Click the graph to go to the Journals screen. The Journals screen displays the information of the selected array. Click a status icon or a number to go to the Continuous Access dashboard. The Continuous Access dashboard screen displays the top 10 consumers of the selected array.
PVOL Response Time
Description: Displays the number of arrays in the Ok, Critical, Warning, or Unknown state. This state is based on the threshold values of the average read response time and the average write response time that are set in the Threshold Setting screen. The graph displays the maximum and average PVOL response time of each array.
Related tasks: To view the detailed usage statistics of associated components in an array, perform one of the following: Click the graph to go to the Ldevs screen. The Ldevs screen displays the information of the selected array. Click a status icon or a number to go to the Continuous Access dashboard. The Continuous Access dashboard screen displays the top 10 consumers of the selected array.
Status
Description: Displays the number of arrays in the Ok, Critical, Warning, or Unknown state depending on the CA link status. A CA link is a link between the primary and secondary array for data transfer. Also displays the number of arrays in the Ok, Critical, Warning, or Unknown state depending on the CA pair status. The CA pair status is the current replication status of the P-VOL or S-VOL on the selected XP, P9500, or XP7 disk array. The replication status shown corresponds to the continuous access transactions occurring on the selected XP, P9500, or XP7 disk array.
Related tasks: Click a status icon or a number to go to the Continuous Access dashboard. The Continuous Access dashboard screen displays the top 10 consumers of the selected array.

Port Avg Throughput (MB)
Description: Displays the graphical representation of the total data transfer through the CA ports (CA Initiator and RCU target) for each array managed by PA.
Related tasks: Click the graph to go to the Ports screen. The Ports screen displays the information of the selected array.
PVOL Data Transfer Rate (MB)
Description: Displays the graphical representation of the aggregate value of Host Throughput vs Async Transfer Rate.
Related tasks: Click the graph to go to the Journals screen. The Journals screen displays the information of the selected array.
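The RPO definition in Table 11 amounts to simple timestamp arithmetic. The following sketch uses invented timestamps and per-journal values to show the calculation and the maximum-RPO figure that the dashboard plots per array.

from datetime import datetime

# RPO: the difference, in seconds, between the data write time on the
# primary volume and the write time of the same data on the secondary
# volume. The timestamps are hypothetical examples.
primary_write = datetime(2017, 12, 1, 10, 0, 0)
secondary_write = datetime(2017, 12, 1, 10, 0, 42)

rpo_seconds = (secondary_write - primary_write).total_seconds()
print(f"RPO: {rpo_seconds:.0f} seconds")  # 42 seconds of potential data loss

# The dashboard plots the maximum RPO per array over the monitored window:
journal_rpos = [12.0, 42.0, 7.5]  # hypothetical per-journal RPO values
print(f"Max RPO: {max(journal_rpos)} seconds")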

Table 12: Capacity
Physical Total Capacity
Description: Displays the donut representing the total physical capacity in TB of the installed Raid Groups in an array. The tabular data displays the Total, Used, and Free physical total capacity of each array.
Related task: Click the donut to go to the Capacity dashboard screen. The Capacity dashboard screen displays the capacity details of the selected array.
Pool Capacity Efficiency
Description: Displays the efficiency of all the pools configured in the monitored disk arrays, in terms of pool used capacity and savings by capacity expansion techniques such as FMD compression, deduplication, and DKC compression.
Dedup: Displays the pool capacity saving, as a percentage, by the deduplication capacity savings function.
DKCComp: Displays the pool capacity savings, as a percentage, by the DKC compression saving function.
FMD compression: Displays the pool capacity savings, as a percentage, by the FMD data compression function.
Compaction Ratio: The compaction ratio is the ratio of the physical storage space that a volume consumes in a thin pool compared to its virtual size, when capacity-saving functions such as DKC compression and deduplication are used, or when FMD drives with inline compression turned on are used.
Related task: Click the donut to go to the Capacity dashboard screen. The Capacity dashboard screen displays the capacity details of the selected array.
Drive Capacity Efficiency
Description: Displays the total physical drive used capacity, in TB and as a percentage, compared to the total physical drive capacity of each drive type configured across the monitored disk arrays.
Related task: Click the donut to go to the Capacity dashboard screen. The Capacity dashboard screen displays the capacity details of the selected array.
Component dashboard
The Component dashboard displays the status of arrays based on the Frontend, Cache, Processor, and Backend categories. The arrays are in the Ok, Critical, or Warning state based on the usage of the components in each category. The usage data is collected against the threshold limits that are set in the Threshold Setting screen for a specified threshold duration. The main pane displays the arrays managed by PA and the description pane displays the usage statistics of the top 10 consumers of the selected array (by default, the graphs are

plotted for the top 10 consumers with data points collected over 6 hours). You can change the number of top components from Actions > Dashboard Settings.
The following graphs display the usage statistics of the top 10 components:
Top 10 Ports by Avg IOPs
Top 10 Ports by Avg Throughput
Cache Write Pending by CLPR
Cache Size and Avg Cache Usage by CLPRs
Top 10 Processors by Utilization
Top 10 Internal RG by Avg Utilization
Top 10 Internal RG by Max Utilization
You can view the corresponding components and their usage statistics by clicking the following:
Graph titles: You are taken to the corresponding component screen that lists the top 10 respective components in the array for that category. It also displays the usage statistics of the first component. For example, when you click Top 10 Processors by Utilization, you navigate to the Processors screen that displays the list of the top 10 Processors in the array with the maximum Processor Utilization.
Graph bar: You are taken to the corresponding component screen that displays the usage statistics of the selected component in the array. For example, when you click a bar in the Top 10 Processors by Utilization graph, you navigate to the Processors screen that displays the usage statistics of that Processor in the array.
Status icon or number: You are taken to the corresponding component screen that sorts and displays the list of the components for that category. For example, when you click the status icon or the corresponding number for the Processor category, you navigate to the Processors screen. This screen sorts and lists the Processors in the array with the maximum Processor Utilization, and displays the usage statistics of the first Processor in the list.
You can click either Component from the HPE XP7 Performance Advisor, or the status icon or the corresponding number in the Component widget on the Overview dashboard, to view the Component dashboard. Refresh the dashboard screen for an updated dashboard status.
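A top-N chart such as those listed above can be derived with a simple sort, as the following sketch shows. It is an illustration only, with invented port names and samples; PA's actual aggregation is not documented here.

# Hypothetical port names mapped to IOPS samples from the dashboard
# window (6 hours by default).
port_samples = {
    "CL1-A": [900, 1100, 950], "CL1-B": [400, 380, 420],
    "CL2-A": [2000, 1800, 2100], "CL2-B": [150, 170, 160],
}

TOP_N = 2  # configurable in PA via Actions > Dashboard Settings

# Rank ports by their average IOPS and keep the top N.
averages = {port: sum(s) / len(s) for port, s in port_samples.items()}
top = sorted(averages.items(), key=lambda kv: kv[1], reverse=True)[:TOP_N]
for port, avg_iops in top:
    print(f"{port}: {avg_iops:.0f} avg IOPS")  # CL2-A first, then CL1-A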

Component dashboard screen details
Frontend
Description: Displays the number of arrays in the Ok, Critical, or Warning state. This state is based on the threshold values of the ports and ports-related metrics that are set in the Threshold Setting screen.
Related tasks: To view the statistics of the ports depending on the ports-related metrics, click the status icon or the corresponding number. The Ports screen displays the usage statistics of the ports in the array.
Cache
Description: Displays the number of CLPRs in the array that are in the Ok, Critical, or Warning state. This state is based on the threshold values of the cache usage percentage and cache write pending utilization metrics that are set in the Threshold Setting screen.
Related tasks: To view the statistics of the CLPRs depending on the cache usage percentage and cache write pending utilization, click the status icon or the corresponding number. The Cache screen displays the usage statistics of the CLPRs in the array.
Processor
Description: Displays the number of MPB/CHA/DKA in the arrays that are in the Ok, Critical, or Warning state. This state is based on the threshold values of the MP blade utilization metrics of the P9500/XP7 arrays and the Backend DKA utilization metrics of the XP24k/XP20k arrays. The threshold values of these metrics are set in the Threshold Setting screen.
Related tasks: To view the statistics of the MPB/CHA/DKA depending on the utilization metrics, click the status icon or the corresponding number. The Processors screen displays the usage statistics of the MPB/CHA/DKA in the array.
Backend
Description: Displays the number of Raid Groups in the array that are in the Ok, Critical, or Warning state. This state is based on the threshold value of the Raid Group utilization that is set in the Threshold Setting screen.
Related tasks: To view the statistics of the Raid Groups depending on the utilization metrics, click the status icon or the corresponding number. The Raid Groups screen displays the usage statistics of the Raid Groups in the array.
Top 10 Ports by IOPS
Description: Displays the top 10 ports depending on the sum of average IOPS.
Related tasks: To view the top 10 ports depending on the IOs, click Top 10 Ports by Avg IOPs. The Ports screen displays the usage statistics of the top 10 ports. If the value of the top consumer is set to 20, then the Ports screen displays the statistics for the top 20 ports. When you click a bar, the Ports screen displays the usage statistics of that port in the array.

Top 10 Ports by Throughput
Description: Displays the top 10 ports in an array depending on the sum of average MBs.
Related tasks: To view the top 10 ports depending on the throughput, click Top 10 Ports by Avg Throughput. The Ports screen displays the usage statistics of the top 10 ports. If the value of the top consumer is set to 20, then the Ports screen displays the statistics for the top 20 ports. When you click a bar, the Ports screen displays the usage statistics of that port in the array.
Cache Write Pending by CLPRs
Description: Displays the top 10 CLPRs in an array depending on the average and maximum cache write pending utilization.
Related tasks: To view the top 10 CLPRs depending on the cache write pending, click Cache Write Pending by CLPRs. The Cache screen displays the usage statistics of the top 10 CLPRs. If the value of the top consumer is set to 20, then the Cache screen displays the statistics for the top 20 CLPRs. When you click a bar, the Cache screen displays the usage statistics of that CLPR in the array.
Cache Size and Avg Cache Usage by CLPRs
Description: Displays the CLPRs in an array depending on the cache size and the cache usage.
Related tasks: To view the top 10 CLPRs depending on the cache usage and size, click Cache Size and Avg Cache Usage by CLPRs. The Cache screen displays the usage statistics of the top 10 CLPRs. If the value of the top consumer is set to 20, then the Cache screen displays the statistics for the top 20 CLPRs. When you click a bar, the Cache screen displays the usage statistics of that CLPR in the array.
Top 10 Processors by Utilization
Description: Displays the top 10 processors in an array depending on the maximum processor utilization percentage.
Related tasks: To view the top 10 processors depending on the maximum processor utilization, click Top 10 Processors by Utilization. The Processors screen displays the usage statistics of the top 10 processors. If the value of the top consumer is set to 20, then the Processors screen displays the statistics for the top 20 processors. When you click a bar, the Processors screen displays the usage statistics of that processor in the array.

Top 10 Internal RG by Avg Utilization
Description: Displays the top 10 raid groups in an array depending on the average utilization.
Related tasks: To view the top 10 raid groups depending on the average utilization, click Top 10 RG by Avg Utilization. The Raid Groups screen displays the usage statistics of the top 10 raid groups. If the value of the top consumer is set to 20, then the Raid Groups screen displays the statistics for the top 20 raid groups. When you click a bar, the Raid Groups screen displays the usage statistics of that raid group in the array.
Top 10 Internal RG by Max Utilization
Description: Displays the top 10 Internal Raid Groups with the maximum utilization.
Related tasks: To view the top 10 raid groups depending on the maximum utilization, click Top 10 RG by Max Utilization. The Raid Groups screen displays the usage statistics of the top 10 raid groups. If the value of the top consumer is set to 20, then the Raid Groups screen displays the statistics for the top 20 raid groups. When you click a bar, the Raid Groups screen displays the usage statistics of that raid group in the array.
Continuous Access dashboard
The Continuous Access dashboard displays the status of the arrays based on the CA link status and the CA pair status. The arrays are in the Ok, Critical, or Warning state based on the usage of the components in that category. The status of the same array can be Critical depending on the CA link status and Ok depending on the CA pair status. The main pane displays the arrays managed by PA and the description pane displays the usage statistics of the top 10 primary components of the selected array (by default, the graphs are plotted for the top 10 components with data points collected over 6 hours). You can change the number of top components from Actions > Dashboard Settings.
The following graphs display the usage statistics of the top 10 components:
Top 10 Journals RPO
Top 10 PVoLs Avg Response Time
Top 10 PVol Journals Data Transfer Rate
Top 10 CA Port Throughput
Top 10 Pvols Max Response Time
You can view the corresponding components and their usage statistics by clicking the following:
Graph titles: You are taken to the corresponding component screen that lists the top 10 respective components in the array for that category. It also displays the usage statistics of the first component. For example, when you click Top 10 Journals Data Transfer Rate, you are taken to the Journals screen that displays the list of the top 10 journals with the highest data transfer rate.
Graph bar: You are taken to the corresponding component screen that displays the usage statistics of the selected component in the array. For example, when you click a bar in the Top 10 Journals Data

Transfer Rate graph, you are taken to the Journals screen that displays the usage statistics of that Journal in the array.
Status icon or number: You are taken to the corresponding component screen that sorts and displays the list of the components for that category. For example, when you click the Critical icon or the corresponding number in the CA pair status, you are taken to the Continuous Access screen. The Continuous Access screen sorts and lists the critical arrays on the top and displays the information of the first array in the list.
You can click either Continuous Access in Dashboards, from the HPE XP7 Performance Advisor, or the status icon or the corresponding number in the Continuous Access widget on the Overview dashboard, to go to the Continuous Access dashboard. Refresh the dashboard screen for an updated dashboard status.
Continuous Access dashboard screen details
Top 10 Journals RPO
Description: Displays the top 10 Journals in the array with the maximum RPO.
Related task: To view the top 10 Journals with the maximum RPO, click Top 10 Journals RPO. The Journals screen displays the usage statistics of the top 10 journals of the array. If the value of the top consumer is set to 20, then the Journals screen displays the statistics for the top 20 journals with the maximum RPO. When you click a bar, the Journals screen displays the usage statistics of that journal in the array.
Top 10 PVoLs Avg Response Time
Description: Displays the top 10 PVoLs in the array depending on the average response time.
Related task: To view the top 10 PVOLs depending on the average response time, click Top 10 PVoLs Avg Response Time. The Ldevs screen displays the usage statistics of the top 10 LDEVs that are configured as PVOLs in the array. If the value of the top consumer is set to 20, then the Ldevs screen displays the statistics for the top 20 PVOLs. When you click a bar, the Ldevs screen displays the usage statistics of that PVOL in the array.
Top 10 PVols Max Response Time
Description: Displays the top 10 PVoLs in the array depending on the maximum response time.
Related task: To view the top 10 PVOLs depending on the maximum response time, click Top 10 PVoLs Max Response Time. The Ldevs screen displays the usage statistics of the top 10 LDEVs that are configured as PVOLs in the array. If the value of the top consumer is set to 20, then the Ldevs screen displays the statistics for the top 20 PVOLs. When you click a bar, the Ldevs screen displays the usage statistics of that PVOL in the array.

Top 10 Journals Data Transfer Rate
Description: Displays the top 10 journals in the array depending on the aggregate value of Host Throughput vs Async Transfer Rate.
Related task: To view the top 10 journals depending on the data transfer rate, click Top 10 Journals Data Transfer Rate. The Journals screen displays the detailed usage statistics of the top 10 journals. If the value of the top consumer is set to 20, then the Journals screen displays the statistics for the top 20 journals. When you click a bar, the Journals screen displays the usage statistics of that journal in the array.
Top 10 CA Port Throughput
Description: Displays the top 10 ports in the array depending on the sum of MBs.
Related task: To view the top 10 ports depending on the data transfer rate, click Top 10 CA Port Throughput. The Ports screen displays the usage statistics of the top 10 ports. If the value of the top consumer is set to 20, then the Ports screen displays the statistics for the top 20 ports. When you click a bar, the Ports screen displays the usage statistics of that port in the array.
Status
Description: Displays the CAs in the arrays that are in the Ok, Critical, or Warning state, depending on the CA link status. A CA link is a link between the primary volume and the secondary volume for data transfer. Also displays the number of CAs in the array that are in the Ok, Critical, or Warning state, depending on the CA pair status.
Related task: To view the statistics of the CAs depending on the CA link status, click the status icon or the corresponding number. The Continuous Access screen displays the usage statistics of the CAs in the array. To view the statistics of the CAs depending on the CA pair status, click the status icon or the corresponding number. The Continuous Access screen displays the usage statistics of the CAs in the array.
Performance dashboard
The Performance dashboard screen displays the status of the array depending on the LDEV IO/s, LDEV MB/s, LDEV Avg Read Response time, and LDEV Avg Write Response time. The arrays are in the Ok, Critical, or Warning state based on the usage of the components in that category. The usage data is collected against the threshold limits that are set in the Threshold Setting screen for a specified threshold duration. The main pane displays the arrays managed by PA and the description pane displays the usage statistics of the top 10 primary components of the selected array (by default, the graphs are plotted for the top 10 components with data points collected over 6 hours). You can change the number of top components from Actions > Dashboard Settings.
The following graphs display the usage statistics of the top 10 components:
Top 10 ldev by Avg IOPS
Top 10 ldev by Avg Throughput
Top 10 ldev by Avg Response time
Top 10 ldev by Max Response time

Top 10 hostgroup by Avg IOPS
Top 10 hostgroup by Avg Throughput
Top 10 hostgroup by Avg Response time
Top 10 hostgroup by Max Response time
Top 10 port by Avg IOPS
Top 10 port by Avg Throughput
Top 10 pool by Avg IOPS
Top 10 pool by Avg Throughput
Top 10 pool by Avg Response time
Top 10 pool by Max Response time
Top 10 Internal raidgroup by Avg Utilization
Top 10 Internal raidgroup by Max Utilization
You can view the corresponding components and their usage statistics by clicking the following:
Graph titles: You are taken to the corresponding component screen that lists the top 10 respective components in the array for that category. It also displays the usage statistics of the first component. For example, when you click Top 10 ldev by Avg Response Time, you are taken to the Ldevs screen. This screen displays the list of the top 10 Ldevs in the array with the highest average response time.
Graph bar: You are taken to the corresponding component screen that displays the usage statistics of the selected component in the array. For example, when you click a bar in the Top 10 ldev by Avg Response Time graph, you are taken to the Ldevs screen. This screen displays the usage statistics of that Ldev in the array.
Status icon or number: You are taken to the corresponding component screen that sorts and displays the list of the components for that category. For example, when you select the Critical status icon or the corresponding number in the LDEV IO, you are taken to the Ldevs screen. This screen sorts and lists the arrays with critical Ldevs on the top, and displays the usage statistics of the first Ldev in the list.
You can click either Performance from the HPE XP7 Performance Advisor, or the status icons or numbers in the Performance widget on the Overview dashboard, to go to the Performance dashboard. Refresh the dashboard screen for an updated dashboard status.

Performance dashboard screen details

LDEV IO/s
Description: Displays the number of LDEVs in the array that are in Ok, Critical, or Warning state. This state is based on the threshold value of the Ldev Frontend IO that is set in the Threshold Setting screen.
Related task: To view the statistics of the Ldevs depending on the frontend IOs, click the status icon or the corresponding number. The Ldevs screen displays the usage statistics of the Ldevs in the array.

LDEV MB/s
Description: Displays the number of Ldevs in the array that are in Ok, Critical, or Warning state. This state is based on the threshold value of the Ldev Frontend MB that is set in the Threshold Setting screen.
Related task: To view the statistics of the Ldevs depending on the frontend MBs, click the status icon or the corresponding number. The Ldevs screen displays the usage statistics of the Ldevs in the array.

LDEV Avg Read Response
Description: Displays the number of Ldevs in the array that are in Ok, Critical, or Warning state. This state is based on the threshold value of the average read response time that is set in the Threshold Setting screen.
Related task: To view the statistics of the Ldevs depending on the average read response time, click the status icon or the corresponding number. The Ldevs screen displays the usage statistics of the Ldevs in the array.

LDEV Avg Write Response
Description: Displays the number of Ldevs in the array that are in Ok, Critical, or Warning state. This state is based on the threshold value of the average write response time that is set in the Threshold Setting screen.
Related task: To view the statistics of the Ldevs depending on the average write response time, click the status icon or the corresponding number. The Ldevs screen displays the usage statistics of the Ldevs in the array.

Top 10 Ldev by Avg IOPS
Description: Displays the top 10 LDEVs in the array depending on the average IOPS.
Related task: To view the top 10 Ldevs depending on the average IOPS, click Top 10 Ldev by Avg IOPS. The Ldevs screen displays the usage statistics of the top 10 Ldevs. If the value of the top consumer is set to 20, then the Ldevs screen displays the statistics for the top 20 Ldevs. When you click a bar, the Ldevs screen displays the usage statistics of that Ldev in the array.

Top 10 Ldev by Avg Throughput
Description: Displays the top 10 Ldevs in the array depending on the average MBPS.
Related task: To view the top 10 Ldevs depending on the average throughput, click Top 10 Ldev by Avg Throughput. The Ldevs screen displays the usage statistics of the top 10 Ldevs. If the value of the top consumer is set to 20, then the Ldevs screen displays the statistics for the top 20 Ldevs. When you click a bar, the Ldevs screen displays the usage statistics of that Ldev in the array.

Top 10 Ldev by Avg Response Time
Description: Displays the top 10 Ldevs in the array depending on the average response time. It includes all the read and write time taken by the Ldevs.
Related task: To view the top 10 Ldevs depending on the average response time, click Top 10 Ldev by Avg Response Time. The Ldevs screen displays the usage statistics of the top 10 Ldevs. If the value of the top consumer is set to 20, then the Ldevs screen displays the statistics for the top 20 Ldevs. When you click a bar, the Ldevs screen displays the usage statistics of that Ldev in the array.

Top 10 Ldev by Max Response Time
Description: Displays the top 10 LDEVs in the array with the maximum response time of read and write operations.
Related task: To view the top 10 Ldevs depending on the maximum response time, click Top 10 Ldev by Max Response Time. The Ldevs screen displays the usage statistics of the top 10 Ldevs. If the value of the top consumer is set to 20, then the Ldevs screen displays the statistics for the top 20 Ldevs. When you click a bar, the Ldevs screen displays the usage statistics of that Ldev in the array.

Top 10 hostgroup by Avg IOPS
Description: Displays the top 10 Host Groups in the array depending on the average IOPS.
Related task: To view the top 10 hostgroups depending on the average IOPS, click Top 10 hostgroup by Avg IOPS. The Host Groups screen displays the usage statistics of the top 10 hostgroups. If the value of the top consumer is set to 20, then the Host Groups screen displays the statistics for the top 20 hostgroups. When you click a bar, the Host Groups screen displays the usage statistics of that hostgroup in the array.

Top 10 hostgroup by Avg Throughput
Description: Displays the top 10 Host Groups in the array depending on the average MBPS.
Related task: To view the top 10 hostgroups depending on the average throughput, click Top 10 hostgroup by Avg Throughput. The Host Groups screen displays the usage statistics of the top 10 hostgroups. If the value of the top consumer is set to 20, then the Host Groups screen displays the statistics for the top 20 hostgroups. When you click a bar, the Host Groups screen displays the usage statistics of that hostgroup in the array.

Top 10 hostgroup by Avg Response Time
Description: Displays the top 10 Host Groups in the array depending on the average response time. It includes all the read and write time taken by the Host Groups.
Related task: To view the top 10 hostgroups depending on the average response time, click Top 10 hostgroup by Avg Response Time. The Host Groups screen displays the usage statistics of the top 10 hostgroups. If the value of the top consumer is set to 20, then the Host Groups screen displays the statistics for the top 20 hostgroups. When you click a bar, the Host Groups screen displays the usage statistics of that hostgroup in the array.

Top 10 hostgroup by Max Response Time
Description: Displays the top 10 Host Groups in the array with the maximum response time of read and write operations.
Related task: To view the top 10 hostgroups depending on the maximum response time, click Top 10 hostgroup by Max Response Time. The Host Groups screen displays the usage statistics of the top 10 hostgroups. If the value of the top consumer is set to 20, then the Host Groups screen displays the statistics for the top 20 hostgroups. When you click a bar, the Host Groups screen displays the usage statistics of that hostgroup in the array.

Top 10 port by Avg IOPS
Description: Displays the top 10 Ports in the array depending on the average IOPS.
Related task: To view the top 10 ports depending on the average IOPS, click Top 10 port by Avg IOPS. The Ports screen displays the usage statistics of the top 10 ports. If the value of the top consumer is set to 20, then the Ports screen displays the statistics for the top 20 ports. When you click a bar, the Ports screen displays the usage statistics of that port in the array.

Top 10 port by Avg Throughput
Description: Displays the top 10 Ports in the array depending on the average MBPS.
Related task: To view the top 10 ports depending on the average throughput, click Top 10 port by Avg Throughput. The Ports screen displays the usage statistics of the top 10 ports. If the value of the top consumer is set to 20, then the Ports screen displays the statistics for the top 20 ports. When you click a bar, the Ports screen displays the usage statistics of that port in the array.

Top 10 pool by Avg IOPS
Description: Displays the top 10 Pools in the array depending on the average IOPS.
Related task: To view the top 10 pools depending on the average IOPS, click Top 10 pool by Avg IOPS. The Pools screen displays the usage statistics of the top 10 pools. If the value of the top consumer is set to 20, then the Pools screen displays the statistics for the top 20 pools. When you click a bar, the Pools screen displays the usage statistics of that pool in the array.

Top 10 pool by Avg Throughput
Description: Displays the top 10 Pools in the array depending on the average MBPS.
Related task: To view the top 10 pools depending on the average throughput, click Top 10 pool by Avg Throughput. The Pools screen displays the usage statistics of the top 10 pools. If the value of the top consumer is set to 20, then the Pools screen displays the statistics for the top 20 pools. When you click a bar, the Pools screen displays the usage statistics of that pool in the array.

Top 10 pool by Avg Response Time
Description: Displays the top 10 Pools in the array depending on the average response time.
Related task: To view the top 10 pools depending on the average response time, click Top 10 pool by Avg Response Time. The Pools screen displays the usage statistics of the top 10 pools. If the value of the top consumer is set to 20, then the Pools screen displays the statistics for the top 20 pools. When you click a bar, the Pools screen displays the usage statistics of that pool in the array.

Top 10 pool by Max Response Time
Description: Displays the top 10 Pools in the array with the maximum response time.
Related task: To view the top 10 pools depending on the maximum response time, click Top 10 pool by Max Response Time. The Pools screen displays the usage statistics of the top 10 pools. If the value of the top consumer is set to 20, then the Pools screen displays the statistics for the top 20 pools. When you click a bar, the Pools screen displays the usage statistics of that pool in the array.

Top 10 raidgroup by Avg Utilization
Description: Displays the graphical representation of the top 10 Raid Groups in the array depending on the average utilization.
Related task: To view the top 10 raidgroups depending on the average utilization, click Top 10 raidgroup by Avg Utilization. The Raid Group screen displays the usage statistics of the top 10 raidgroups. If the value of the top consumer is set to 20, then the Raid Group screen displays the statistics for the top 20 raidgroups. When you click a bar, the Raid Group screen displays the usage statistics of that raidgroup in the array.

Top 10 raidgroup by Max Utilization
Description: Displays the graphical representation of the top 10 Raid Groups in the array with the maximum utilization.
Related task: To view the top 10 raidgroups depending on the maximum utilization, click Top 10 raidgroup by Max Utilization. The Raid Group screen displays the usage statistics of the top 10 raidgroups. If the value of the top consumer is set to 20, then the Raid Group screen displays the statistics for the top 20 raidgroups. When you click a bar, the Raid Group screen displays the usage statistics of that raidgroup in the array.

Capacity dashboard

The Capacity dashboard displays the status of the arrays based on the threshold value of the Pool Capacity Utilization metric that is set in the Threshold Settings screen. The default threshold value of Pool Capacity Utilization is 80 percent. The main pane displays the arrays managed by PA, and the description pane displays the following capacity details of the selected array:

Physical capacity distribution
Drive capacity utilization
Overall Pool capacity efficiency
Top X Pools by Max Pool Utilization

The information displayed on the dashboard is based on the data from the last configuration collection. To drill down to the related screens or components, click either the heading or the data fields.

Capacity dashboard screen details

Physical

Total Capacity
Description: Displays the total physical capacity; it does not include the logical capacity of an array.

Allocated Host capacity
Description: Displays the capacity of all the logical volumes that a host can access.

Unallocated Volume capacity
Description: Displays the unallocated volume capacity.

Free Available Space
Description: Displays the space obtained by subtracting the Allocated Host capacity and the Unallocated Volume capacity from the Total Physical Capacity.

Reserved Pool capacity
Description: Displays the capacity of the volumes that are reserved for storing Snapshot data or Thin Provisioning write data.
THP Pool volume Used Capacity: Displays the total capacity that is actually used in the pool volumes of Thin Provisioning.
THP Pool volume Unused Capacity: Displays the capacity obtained by subtracting the ThP Pool volume Used Capacity from the total capacity of the pool volumes.
Other Reserved Capacity: Displays the total capacity that is the sum of the Snap Pool capacity, the Journal volumes, the pool capacity that is not used as pool capacity, and the capacity of the system pool VOLs management area.

Total
Description: Displays the sum of the Allocated Host capacity, Unallocated Volume capacity, Free Available Space, and Reserved Pool capacity.
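The Physical capacity fields above are related by simple arithmetic. A minimal Python sketch with hypothetical figures follows; to make the four buckets sum to the Total as described, the sketch assumes that reserved pool capacity is also carved out of the free space, which is an assumed reading of the table above.

```python
# Hypothetical figures in TB; only the relationships come from the guide.
total_physical  = 250.0   # Physical Total Capacity
allocated_host  = 120.0   # Allocated Host capacity
unallocated_vol = 30.0    # Unallocated Volume capacity
reserved_pool   = 40.0    # Reserved Pool capacity

# Free Available Space: Total Physical minus Allocated Host and Unallocated
# Volume capacity, with reserved pool capacity also excluded so that the
# four buckets below add up to the Total (assumed reading).
free_available = total_physical - allocated_host - unallocated_vol - reserved_pool

# Total: sum of the four buckets, per the table above.
assert total_physical == allocated_host + unallocated_vol + free_available + reserved_pool
print(f"Free Available Space: {free_available} TB")   # -> 60.0 TB
```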

Pool Capacity Efficiency
Description: Displays the efficiency of all the pools configured in the array, in terms of pool used capacity and savings by capacity expansion techniques such as FMD compression, deduplication, and DKC compression.

Saving
Description: Displays the capacity saving in TB by the capacity expansion techniques.

Used
Description: Displays the total used capacity in TB of a disk array.

Savings
Description: Lists the percentage of pool capacity saving for each capacity expansion technique.
FMD compression: Displays the pool capacity savings in terms of percentage by the FMD data compression function.
DKC Comp: Displays the pool capacity savings in terms of percentage by the DKC compression saving function.
Dedup: Displays the pool capacity saving in terms of percentage by the deduplication capacity saving function.
Compaction ratio: Displays the pool capacity savings by the compaction ratio. This is the ratio of the physical storage space that a volume consumes in a thin pool compared to its virtual size, when capacity-saving functions such as DKC compression and deduplication are used, or when FMD drives with inline compression turned on are used. (See the sketch after this table section.)

Drive Capacity Utilization
Description: Displays the capacity utilization for each Raid Group type in GBs and percentage. The meter bar displays the used and available capacity for the drive.

SAS/15000
Description: Displays the capacity that is the sum of all the logical volumes configured for the SAS/15000 drive type without virtualization, divided by the sum of the physical volume sizes of the Raid Groups in the SAS/15000 drive.

External
Description: Displays the capacity that is the sum of all the logical volumes configured for the External drive type without virtualization, divided by the sum of the physical volume sizes of the Raid Groups in the External drive.

SAS/10000
Description: Displays the capacity that is the sum of all the logical volumes configured for the SAS/10000 drive type without virtualization, divided by the sum of the physical volume sizes of the Raid Groups in the SAS/10000 drive.
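The Saving, Savings, and Compaction ratio fields are simple ratios between the virtual capacity written and the physical capacity actually consumed. The guide does not spell out the formulas, so the following Python sketch with hypothetical figures only illustrates the usual derivation of an "N:1" compaction ratio and a percentage saving.

```python
# Hypothetical pool figures (TB); the definitions paraphrase the
# Capacity dashboard descriptions above.
virtual_written = 100.0   # data written to the thin volume (virtual size consumed)
physical_used   = 40.0    # physical space consumed in the pool after
                          # compression/deduplication

# Compaction ratio: virtual size vs. physical space consumed,
# conventionally shown as "N:1".
compaction_ratio = virtual_written / physical_used
print(f"Compaction ratio: {compaction_ratio:.1f}:1")   # -> 2.5:1

# Percentage saving by the capacity expansion techniques.
saving_tb = virtual_written - physical_used
saving_pct = 100.0 * saving_tb / virtual_written
print(f"Saving: {saving_tb} TB ({saving_pct:.0f}%)")   # -> 60.0 TB (60%)
```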

Top 10 Pools by Max Pool Utilization
Description: Displays the top 10 Pools based on the maximum utilization for a specified duration. The pools are listed in descending order of their used capacity.

Pool ID
Description: Displays the names of the pools with the maximum utilization for the specified duration.

Used Capacity
Description: Displays the maximum used capacity in GB of a pool for the specified duration.

Usage %
Description: Displays the percentage of maximum used capacity of a pool for the specified duration.

Saving % (FMD/Comp/Dedup)
Description: Displays the capacity saving percentage using the FMD inline compression, dedup, and compression techniques for the specified duration.

Compaction Ratio
Description: Displays the compaction ratio of the respective pool.

View associated components from Capacity dashboard

Procedure

To view the associated components, click the following:

Physical Total Capacity, or any field in the section, to drill down to the Summary View screen. This screen displays the summary of all the components of the selected array.

Drive Capacity Utilization to drill down to the Raid Groups screen. This screen displays the configuration, performance, and utilization graphs of all the drive types in the selected array. The records of the selected drive type are displayed when you click a drive type in the Drive Capacity Utilization section.

Pool Capacity Efficiency, or any field in the section, to drill down to the THP/SMART Pools screen. This screen displays the configuration details, and the performance and utilization graphs, for all the ThP/Smart Pool records.

Top X Pools By Max Pool Utilization to drill down to the THP/SMART Pools screen. This screen displays the configuration details, and the performance and utilization graphs, for all the ThP/Smart Pool records. The records of the selected pool are displayed when you click a Pool ID in the list in Top X Pools By Max Pool Utilization.

View dashboards

By default, the Overview dashboard is the first screen that appears when you log in to PA for the first time.

Procedure

From HPE XP7 Performance Advisor, click a dashboard. Or, click a status icon or a number displaying the status of the arrays in a widget to drill down to the appropriate dashboard.

Manage and configure dashboards

Add, remove, and rearrange widgets

Procedure
1. From the HPE XP7 Performance Advisor main menu, select Overview.
2. Click Actions > Add/Remove Widgets.
3. On the Add/Remove Widget dialog box, select or clear the check boxes to add or remove the widgets in the Overview dashboard.
4. Click Save.

To rearrange the widgets, click and hold a widget using the mouse cursor, and move it to the required location. You can also click the cross icon on a widget to remove it. Persist these actions by clicking the Save View link.

Reset the widgets

After adding or removing the widgets in the Overview dashboard, you can reset the widgets.

Procedure
1. From HPE XP7 Performance Advisor, select Overview.
2. Click Actions > Reset Widgets.

Edit Threshold

Use this option to edit the threshold of a metric for a component.

Procedure
1. From any dashboard screen, click Actions.
2. Select Edit threshold. The Threshold Setting screen appears.

Add new licenses

Use this option to add new licenses for PA.

Procedure

To add new licenses, click Add new licenses. The Overview dashboard displays the status of PA licenses and their date of expiry. For more information, see PA Licenses.

Set dashboard duration or number of top components

Procedure
1. From any dashboard screen, click Actions and select Dashboard Setting.
2. On the Dashboard Setting dialog box, select the duration in Dashboard Duration and enter the number in Top X Component value. This number must be a multiple of 10, with a minimum value of 10.
3. Click Ok.

These changes are reflected across the array-level dashboards.

Save or email dashboard statistics

Use this option to generate and save the PDF or CSV file on your management station, or to send these files in an email.

Procedure
1. From HPE XP7 Performance Advisor, click Actions.
2. Select either Save As or Send As, for saving the statistics on the management station or for sending them to an email account.
3. Select the format as PDF or CSV. The file is generated. Click the file to save it on the local machine.
4. To send an email, enter a valid email address in the Email Address field and click Send. The default email application opens.
5. Attach the file in the email and click Send.

Charts

About Charts

You can plot the performance data for components that belong to the same or different XP and XP7 disk arrays. Graphical representation of key parameters for components is especially useful when you want to compare similar components of different XP and XP7 disk arrays to determine their performance and observe trends. You can plot performance graphs of components for different metrics that belong to the following metric categories:

Frontend IO Metrics
Description: Provides metrics for measuring the I/Os from a host to the array.
Unit of measurement: IO/second

Frontend MB Metrics
Description: Provides metrics for measuring the throughput of the I/Os from a host to the array.
Unit of measurement: MB/second

Utilization Metrics
Description: Provides metrics for measuring the CPU cycles of the processors that reside on the CHAs and DKAs in the XP disk arrays, and on the MP Blades in the P9500/XP7 disk arrays. In addition, this category also provides metrics for measuring the cache and the RAID Group utilization in an array.
Unit of measurement: % utilization

Backend Metrics
Description: Provides metrics for measuring the number of reads and writes on the disks of a given array; applies to RAID Groups and physical LDEVs.
Unit of measurement: Number of reads, number of writes

Response Time Metrics
Description: Provides metrics for measuring the read response time and the write response time for the read I/O requests and the write I/O requests on an array.
Unit of measurement: Read response time, write response time

The metrics that you select are component-driven; a specific set of metrics is displayed for the selected components. Associated components are displayed for selection in the Actions menu. For example, all the DKA pairs and their MPs, RAID Groups, and associated physical LDEVs and pool LDEVs are grouped in the backend category. You can also analyze the performance of a component by viewing its data points collected at different collection rates in the same chart. You can compare components across the XP and the XP7 disk arrays based on the above categories. (Ensure that you select every element that you want to appear in your chart, because PA graphs only those elements that are specified.)

Charts screen elements

The following image is an example of the Chart View pane in the component and feature screens; it displays the chart work area for a cache component record.

1. CLPR0
Description: In the sample image, displays the CLPR name of the cache record that you have selected from the master pane. Typically displays the name of the item that you have selected from the master pane.

2. Time/Date Filters
Description: Provides options to view the performance data of the selected components by date and time. The Preset option displays the following ranges of dates and time intervals: 1 hour, 6 hours, 12 hours, 1 day, 1 week. Custom: Displays the option to select the duration (start and end date) from the calendar. The duration chooses the timeline for which you want to monitor the data points. By default, the data points collected in the last one hour of the management station's time are displayed if you do not specify a particular duration.

3. Associated tabs
Description: Displays the performance charts for the associated components.

4. Auto Update
Description: Appends the charts with the latest performance collection data automatically, without refreshing the screen.

5. Individual chart windows
Description: By default, each chart window is identified by the metric category for which the performance metrics of components are plotted. The Chart View pane comprises five chart windows, each representing a specific metric category. The performance metrics of components for the same metric category are plotted in a single chart window; for different metric categories, the performance metrics of components are plotted in separate chart windows.

6. Tool tip
Description: Displays the following details about a particular data point: the XP/XP7 disk array to which the selected component belongs, the selected component name, the selected metric name, the selected duration, the current performance value, and drive type information if the component is a RAID Group or LDEV.

7. Synchronized line
Description: The green line unifies all the chart windows in the Chart View. For example, if you zoom across the data points of one chart, you simultaneously zoom the data points of all the chart windows in the Chart View.

8. Data points
Description: Displays the data points plotted in a chart. By default, only the data points that are plotted for the last one hour of the management station's time are displayed in the detail pane.

9. Threshold line
Description: The red dotted line parallel to the X-axis (Date and Time) indicates the threshold line/threshold value for the metric.

10. Zoom panel
Description: Displays the zoom bar to zoom in on data points for a specified threshold duration.

IMPORTANT: By default, the performance graphs in the Chart View are plotted only for the last 1 hour of the management station's time.

NOTE: These selections work only on the active chart windows. If the total number of data points from all the performance graphs exceeds 500 in a chart window, the data points are not rendered, to optimize the charting functionality in PA. You can hover the pointing device over the line graphs to view the data points. The performance or utilization graphs for inactive components have only the start and end data points plotted in the chart window, connected by a straight line. For every individual component, the percentile value is displayed in the tool tip.

Plot charts

Prerequisite

Ensure that the performance data is collected. For more information, see About performance data collections on page 59.

IMPORTANT:

The components are available for selection only if they are configured on an array, because performance data is collected only for the configured components.

The following metrics are NOT applicable for the XP or XP7 continuous access journal pool LDEVs:
Frontend IO metric category: LDEV Random Writes and LDEV Sequential Writes
Frontend MB metric category: Random MB Write and Sequential MB Write
Response Time metric category: Maximum Write Response and Average Write Response

If you split the journal LDEVs, external RAID Group, RAID Group, ThP pool, and the snapshot[1] into two schedules, and in charts you select Overall LDEVs, the combined data points from both the schedules are plotted on the chart. In addition, repeated time stamps are displayed if the collection frequency for both the schedules is the same. As a result, incorrect values are plotted on the graph (see the sketch after this procedure).

You can select array components that belong to the same or different XP and XP7 disk arrays, or custom groups. While selecting the components, press the Shift key for sequential selection or the Ctrl key for random selection of multiple components. You can also search for physical LDEVs that belong to an array. Each array can be identified by its Disk Controller (DKC) or model number; you can also set a personalized name for arrays. The DKC is the hardware component that manages front-end and back-end storage operations. The term DKC is sometimes used to refer to the entire RAID storage system.

Plot charts
1. From the HPE XP7 Performance Advisor main menu, navigate to a component screen. Alternatively, you can drill down from the dashboard level.
2. Select one or more components from the master pane for which you want to plot data.
3. To plot more metrics, navigate to Actions > Select Metrics.
4. From the Metric Category list, select a metric category. The check boxes for the default metrics are selected by default.
5. Select the check box of the metric for which you want to plot data. To add the metrics on the Chart View pane, click Ok.

[1] The snapshot is a business copy volume type that depicts a point-in-time copy of the original primary volume.
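The split-schedule warning above comes down to duplicate time stamps in the merged series. A minimal Python sketch with two hypothetical schedules collected at the same frequency shows the collision:

```python
# Two hypothetical schedules covering the same LDEVs at the same frequency.
schedule_a = {"10:00": 120, "10:05": 130}   # timestamp -> IO/s
schedule_b = {"10:00": 115, "10:05": 140}

# Plotting "Overall LDEVs" combines both series, so each timestamp
# appears twice and the chart shows conflicting values.
combined = sorted(list(schedule_a.items()) + list(schedule_b.items()))
print(combined)
# [('10:00', 115), ('10:00', 120), ('10:05', 130), ('10:05', 140)]
# Repeated time stamps like these are what make the plotted values
# incorrect; keeping the components in a single schedule avoids the collision.
```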

IMPORTANT: If the configuration collection is not yet performed for an array, the dashboard and the component-level screens may not display any data. For virtual volumes like the ThP and the snapshot pools, the respective component type is displayed if the selected array supports that particular component configuration.

All the metrics added to the chart work area are automatically plotted in different chart windows. In the Summary View, you can view the latest data points in tabular format as well as plot the historical graphs for the metrics listed in the table.

Arrays

About Arrays

The Arrays component type covers the frontend IOs, MBs of data transfer, average response time, and maximum response time of the components in an array. This screen displays the graphical representations of the component usage during the selected threshold time interval. The arrays are listed according to their health status: the critical arrays are listed first, followed by the arrays that need attention. Either choose a Preset interval (1 hour, 6 hours, 12 hours, 1 day, 1 week, or last 10 collections) or customize the time interval by providing the start and end time.

Array screen details

Ldev Total IO - Frontend
Description: Displays the graphical representation of the total frontend IOs for the Ldevs in the array for the selected time interval.

Ldev Total MB - Frontend
Description: Displays the graphical representation of the total frontend throughput for the Ldevs in the array for the selected time interval.

Ldev Avg Response Time - Frontend
Description: Displays the graphical representation of the average response of the Ldevs for the reads and writes in the array for the selected time interval.

Ldev Max Response Time - Frontend
Description: Displays the graphical representation of the maximum response of the Ldevs for the reads and writes in the array for the selected time interval.

Ldev Avg Read Response Time
Description: Displays the graphical representation of the average response of the Ldevs for the reads in the array for the selected time interval.

Ldev Avg Write Response Time
Description: Displays the graphical representation of the average response of the Ldevs for the writes in the array for the selected time interval.

Ldev Max Read Response Time
Description: Displays the graphical representation of the maximum response of the Ldevs for the reads in the array for the selected time interval.

Ldev Max Write Response Time
Description: Displays the graphical representation of the maximum response of the Ldevs for the writes in the array for the selected time interval.

Viewing other components in the array

Procedure
1. From HPE XP7 Performance Advisor, select Array.
2. To choose the duration, either select the Preset value or select the Custom option. Provide the start and end time when you select the Custom option, and click Apply.
3. Select an array from the main pane and right-click that array to select one of the following in the Association Links list:
Ports
Hostgroup
Processor
Cache
Ldev
Raidgroup
Pool

You can save the graphs and also send these graphs in emails by using the Save As and Send As options from Actions.

Ports

PA provides the overall configuration, performance, and utilization details of various components of the XP and the XP7 disk arrays.

About Ports

A port is a physical connection that allows data to pass between a host and the disk array. The number of ports on a disk array depends on the number of supported I/O slots and the number of ports available per I/O adapter. The XP and XP7 family of disk arrays supports Fibre Channel (FC) ports and other port types. Ports are named by port group and port letter, such as CL1-A: CL1 is the group; A is the port letter (see the sketch below).

The Ports component type comprises the frontend ports that are configured on the XP and XP7 disk arrays. PA monitors the historical data collected, and flags the arrays as critical in the Overview dashboard when one or more port metrics cross the threshold level. The Ports component status is based on the threshold values set for the Average Frontend IOPS and the Average Frontend MBPS metrics.

You can access the Ports screen for an array using the main menu, or drill down from the Component widget in the Overview dashboard by clicking the critical icon in the Port IO or Port Throughput donuts. This redirects you to the Component dashboard with the critical array sorted on top of the master pane. To identify the critical ports of the selected array, click the critical icon in the Frontend section in the detail pane. The Top 10 Port By IO and the Top 10 Port By Throughput graph titles depict the top 10 consumers of the port component by IO and throughput respectively. In the Ports screen, select the critical ports that are sorted on top of the master pane. Press and hold down the Ctrl key to plot historical data for multiple components. View the data plotted for the Average Port IO - Frontend and the Average Port MB - Frontend metrics. If the data points for the two metrics have crossed the threshold at least once, then the array and the component are flagged as critical, and hence require your immediate attention.
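Port names follow the group-plus-letter pattern described above (CL1-A: group CL1, port letter A). A trivial, purely illustrative Python sketch of splitting such a name:

```python
def split_port_name(name: str) -> tuple[str, str]:
    """Split a port name such as 'CL1-A' into its port group and port letter."""
    group, letter = name.split("-", 1)
    return group, letter

print(split_port_name("CL1-A"))  # -> ('CL1', 'A')
```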

You can also monitor the behavior of the associated components, that is, the LDEVs and Host Groups. You can also monitor real-time data, plot trend values for all the charts, and plot forecast lines for critical metrics to predict behavior within a specified time frame. The default threshold duration is 6 hours.

Use the Template feature for frequent monitoring of performance metrics, and to foresee performance bottlenecks in advance. Templates can provide at-a-glance information on the behavior of critical components and metrics.

Ports screen details

Filters:
Arrays: Displays all the arrays that PA monitors.
Status: Displays all ports by status.
Port Name: Displays a list of ports by name.

Master pane details:
Status icon: Displays the icon indicating the current status for an individual port. The Ports component status is based on the threshold values set for the Average Frontend IOPS and the Average Frontend MBPS metrics.
Port Name: Displays the port ID.
Port Type: Displays the port type. The port types include Fibre(Target), Fibre(Initiator), Fibre(External), Fibre(RCU Target), Fibre(Continuous Access Target), Fibre(Continuous Access Initiator), Fibre(External LUN Initiator), and so on.
IOPS: Displays the average I/Os for the selected port.
MBPS: Displays the average MB/s for the selected port.

Detail pane > Chart View: Metrics (Default metric Yes/No):
Port Maximum IO Frontend: Yes
Port Minimum IO Frontend: No
Port Avg IO Frontend: Yes
Port Total IO - Frontend: Yes (available for XP7 arrays from microcode version /00 onwards)
Port Read IO - Frontend: No (available for XP7 arrays from microcode version /00 onwards)

Port Write IO - Frontend: No (available for XP7 arrays from microcode version /00 onwards)
Port Maximum MB Frontend: Yes
Port Minimum MB Frontend: No
Port Avg MB Frontend: Yes
Port Hourly Throughput: No
Port Daily Throughput: No
Port Weekly Throughput: No
Port Total MB - Frontend: Yes (available for XP7 arrays from microcode version /00 onwards)
Port Read MB - Frontend: No (available for XP7 arrays from microcode version /00 onwards)
Port Write MB - Frontend: No (available for XP7 arrays from microcode version /00 onwards)
Port Avg Response Time - Frontend: Yes (available for XP7 arrays from microcode version /00 onwards)
Port Avg Read Response Time - Frontend: No (available for XP7 arrays from microcode version /00 onwards)
Port Avg Write Response Time - Frontend: No (available for XP7 arrays from microcode version /00 onwards)

Associated Components:
Ldev: Displays all the LDEVs that are associated with the selected port for the selected array.
Host Group: Displays the list of host groups associated with the selected port for the selected array.

NOTE: For a description of all the port metrics, refer to Metric Category, metrics, and descriptions on page 336.

Host Groups

About Host Groups

A host group is a group of hosts that belong to a particular World Wide Name (WWN) group. The world wide name group provides access for every host in the specified WWN (a unique identifier assigned to a

Fibre Channel device) group to a specified logical unit or group of units. This is part of the LUN Security feature. The Host Groups component type comprises three main component types: Ports, RAID Groups, and LDEVs.

You can access the Host Groups screen using the main menu, or drill down from the Performance dashboard by clicking any of the following graph titles: Top 10 HG By IO, Top 10 HG By Throughput, Top 10 HG By Avg Response Time, Top 10 HG By Max Response Time. The Host Groups component status is based on the following corresponding metrics, for the threshold values set in the Threshold Settings screen: Host Groups IOPS, Host Groups MBPS, Host Groups Average Read Response Time, and Host Groups Average Write Response Time. You can also hover over the individual bars in the graphs in the Performance dashboard for more information. To filter and view any one of the top 10 HG consumers, click the individual bar, as required.

Click a host group in the master pane, and navigate to the associated tabs to view the Ports and the Ldevs that are associated with the selected host group. You can also view the components associated with an individual host group by right-clicking the host group and selecting Ldevs or Ports from the Association Links list. If you select Ldevs, the Ldev screen appears. By default, the master pane displays all LDEV types, such as THP, SNAP, CA/CAJ, JNL Volumes, and BC. If you want to view only THP LDEVs, then from the Ldev Types list in the filter pane, select THP, and click Apply. The RG column displays the individual Raid Group associated with the LDEVs. If you have selected a pool LDEV (THP), the RG column displays the LDEV type (THP) along with the pool ID; for example, THP-PID(99).

Use the associated tabs of the association links to further monitor the components that are associated with the selected LDEVs. For example, if you want to monitor the pool utilization of a THP LDEV, from Association Links > Pool, you can view the Pool Utilization chart in the detail pane. You can also click the Pool associated tab from the Ldev screen. In the above-mentioned selections, the LDEVs are displayed in descending order, where the maximum utilized LDEV is displayed first, followed by the subsequent and least utilized LDEVs. You can also monitor real-time data, plot trend values for all the charts, and plot forecast lines for critical metrics to predict behavior within a specified time frame. The default time duration is 6 hours.

NOTE: The ports and LDEVs can be associated with multiple hosts in a host group.

Use the Template feature for frequent monitoring of performance metrics, and to foresee performance bottlenecks in advance. Templates can provide at-a-glance information on the behavior of critical components and metrics.

Host Groups screen details

Filters:
Arrays: Displays all the arrays that PA monitors.
Status: Displays all the host groups that are configured in the selected array, by status.
Port Name: Displays a list of ports associated with the selected host group.
Host Group: Displays all the host groups configured in the array. You can filter and view an individual host group using the drop-down list.

Master pane details:

Status icon: Displays the icon indicating the current status of a host group. The status is based on the threshold values of the following metrics: Host Group IOPS, Host Group MBPS, Host Group Avg Read Response Time, and Host Group Avg Write Response Time.
Host Group: Displays the host group, which is a user-defined group on an array. By default, all host groups are displayed.
Port: Displays the associated frontend ports for the selected host group.
Total IOPS: Displays the aggregate I/Os from each host group.
Total MBPS: Displays the total frontend throughput in MB/s on each host group.

Detail pane > Chart View: Metrics (Default metric Yes/No):
Host Group Total IO Frontend: Yes
Host Group Total IO Writes - Frontend: No
Host Group Total IO Reads - Frontend: No
Host Group Total IO Miss - Frontend: No
Host Group Total MB - Frontend: Yes
Host Group Total MB Writes - Frontend: No
Host Group Total MB Reads - Frontend: No
Host Group Avg Write Response Time: Yes
Host Group Maximum Write Response Time: No
Host Group Avg Read Response Time: Yes
Host Group Maximum Read Response Time: No
Host Group Read MB Ratio - Frontend: No

Host Group Read IO Ratio - Frontend: No

Detail pane > Associated Components:
Port: Displays all the ports that are associated with an individual host group.
Ldev: Displays all the LDEVs that are associated with a host group.

NOTE: Refer to Metric Category, metrics, and descriptions on page 336 for a list of all the Host Groups metrics and their descriptions.

Host View

About Host View

Host View is an aggregated view of all the LDEVs configured for the host. It provides the performance metric statistics at the host level. Host View is currently based on host groups. The Host component type comprises two main component types: Host Groups and LDEVs. You can view the Host View screen by clicking Host View from the main menu.

The host component status is based on the following host group metrics, for the threshold values set in the Threshold Setting screen:
Host Group IOPS
Host Group MBPS
Host Group Avg Read Response Time
Host Group Avg Write Response Time

Click a host from the master pane, and navigate to the associated tabs for viewing the Host Groups, Ldevs, and Ports that are associated with the selected host group. Use the Template feature for frequent monitoring of performance metrics, and to foresee performance bottlenecks in advance. Templates can provide at-a-glance information on the behavior of critical components and metrics.

Host View screen details

Filters:
Arrays: Displays all the arrays that PA monitors.
Status: Displays all the host groups that are configured in the selected array, by status.
Host Group: Displays all the host groups configured in the array. You can filter and view an individual host group using the drop-down list.

Master pane details:

Status icon: Displays the icon indicating the current status of a host group. The status is based on the threshold values of the following metrics: Host View IOPS, Host View MBPS, Host View Avg Read Response Time, and Host View Avg Write Response Time.
Host Group: Displays the host group, which is a user-defined group on an array. By default, all host groups are displayed.
Total IOPS: Displays the aggregate I/Os from each host group.
Total MBPS: Displays the total frontend throughput in MB/s on each host group.

Detail pane > Chart View: Metrics (Default metric Yes/No):
Host Total IO Frontend: Yes
Host Total IO Writes - Frontend: No
Host Total IO Reads - Frontend: No
Host Total MB - Frontend: Yes
Host Total MB Writes - Frontend: No
Host Total MB Reads - Frontend: No
Host Avg Write Response Time - Frontend: Yes
Host Maximum Write Response Time - Frontend: No
Host Avg Read Response Time - Frontend: Yes
Host Maximum Read Response Time - Frontend: No

Detail pane > Associated Components:
Port: Displays all the ports that are associated with an individual host group.
Ldev: Displays all the LDEVs that are associated with a host group.
Host Group: Displays all the associated Host Groups.

Processors

About Processors

IMPORTANT: This section is applicable for the P9500, the XP24000, and the XP7 disk arrays.

The MP blades are the microprocessor blades in the P9500/XP7 disk arrays. Each MP blade has four microprocessors (MPs) residing on it. In the XP disk arrays, the MPs reside on the CHAs and the Disk Processors (DKPs). DKPs do not exist in the P9500/XP7 disk arrays; there, all the MPs form part of the MP blades.

The Processors component screen displays all the processors that are configured on a specified array. You can access the Processors screen using the main menu, or drill down from the Overview dashboard. The status of a Processor is based on the MP Blade Utilization metric. If the MP Blade Utilization for any of the Processors exceeds the defined threshold limit during the specified threshold duration, then the array as well as the component is flagged as critical in the Overview and the Component levels of dashboards. For example, if the usage of a Processor for the MP Blade Utilization metric exceeded the defined threshold at least once, then the array on which the processor is configured is flagged as critical in the Processor Utilization section in the Component widget in the Overview dashboard.

To determine the cause of this bottleneck, click the critical icon. The Component dashboard is displayed with all the critical arrays sorted on top of the master pane. In the detail pane, click the critical status icon under the Processor component. Alternatively, click the graph title Top 10 Processors By Utilization under Processor to display the top Processor consumers in the Processors screen master pane. You can also click an individual bar to filter and display only that processor. The height of each bar represents the value. In the Processors screen, see if the historical data for the Average MP Blade Util metric has gone above the defined threshold value. If the usage exceeds the default value at least once, then it requires your immediate attention.

You can also monitor real-time data, plot trend values for all the charts, and plot forecast lines for critical metrics to predict behavior within a specified time frame. The default threshold duration is 6 hours. Use the Template feature for frequent monitoring of performance metrics, and to foresee performance issues in advance. Templates can provide at-a-glance information on the behavior of critical components and metrics.

Processors screen details

Filters:
Arrays: Displays all the arrays that PA monitors.
Status: Displays Processors of all statuses by default.
MP Blades Name: Displays all the Processors configured on the selected array.

Master pane details:
Status icon: Flags the component as Critical, Warning, OK, or Unknown. Critical statuses are displayed on top of the component list by default. The status for an individual Processor is computed based on the threshold value of the MP Blade Utilization metric.

MPB/CHA/DKA ID: Displays the MP Blade, CHA, or DKA ID.
MPB/CHA/DKA Name: Displays the MP Blade, CHA, or DKA name.
MPB/CHA/DKA Util: MPB/CHA/DKA Util (Average Utilization) indicates the average Processor utilization by each processing type.

Detail pane > Chart View: Metrics pertaining to MP Blades on XP7/P9500 arrays (Default metric Yes/No):
MP Blade Avg Util: Yes
Average MP Blade IO Buffer Count: Yes
MPB CLPR Usage Util: Yes
MPB CLPR Writes Pending Util: Yes
MP Blade MPB-1MA Utilization Processing Types: Yes
MP Blade Processor Util: Yes
MP IO Buffer Count: Yes

Detail pane > Chart View: Metrics available for XP24000 (Default metric Yes/No):
CHIP Util Total: Yes
CHIP Util MP0: Yes
CHIP Util MP1: Yes
CHIP Util MP2: Yes
CHIP Util MP3: Yes
ACP Pair Utilization: Yes
MP0 Utilization: Yes
MP1 Utilization: Yes
MP2 Utilization: Yes
MP3 Utilization: Yes

ACP Pair Sequential Read Tracks - Backend: No
ACP Pair Non-Sequential Read Tracks - Backend: No
ACP Pair Write Tracks - Backend: No
ACP Pair Total Tracks - Backend: No
ACP Total IO - Frontend: No
ACP Total Random IO - Frontend: No
ACP Random Reads - Frontend: No
ACP Random Read Cache Hits - Frontend: No
ACP Random Writes - Frontend: No
ACP Total Sequential IO - Frontend: No
ACP Sequential Reads - Frontend: No
ACP Sequential Read Cache Hits - Frontend: No
ACP Sequential Writes - Frontend: No
ACP Total MB - Frontend: No
ACP Total Random MB - Frontend: No
ACP Random Read MB - Frontend: No
ACP Random Write MB - Frontend: No
ACP Total Sequential MB - Frontend: No
ACP Sequential Read MB - Frontend: No
ACP Sequential Write MB - Frontend: No

Metrics available for XP/P9500/XP7 arrays (Default metric Yes/No):
ACP Util Total: Yes
ACP Util MP0: Yes

ACP Util MP1: Yes
ACP Util MP2: Yes
ACP Util MP3: Yes

Associated components:
Cache: Redirects to the Cache screen displaying the CLPR ID assigned to the selected Processor.
LDEVs: Displays all the LDEVs that are associated with the selected processor.

NOTE: Refer to Metric Category, metrics, and descriptions on page 336 for the list of all Processors metrics and their descriptions.

View top 20 consumers of an MP blade

IMPORTANT: This section is applicable for P9500/XP7 disk arrays.

The top 20 consumers can be LDEVs, continuous access journal groups, or the E-LUNs (external volumes) that are assigned to an MP blade. The top 20 consumers count is derived based on each consumer's average utilization of the CPU cycles across the collection cycles during the selected duration (a sketch of this computation follows the example below). Those consumers whose average MP Blade utilization percentage is high when compared to the other associated consumers are categorized as the top 20 consumers.

Prerequisites

The performance data collection is complete for the Processors.

Procedure
1. From the Processors screen, select the Processor, as desired.
2. To view the top 20 consumers, navigate to Actions > MP Blade Util-Top 20 Consumers. A column graph is displayed in a separate Utilization Metrics chart window, where each column depicts the average MP blade utilization by each consumer. The X-axis represents the consumer IDs (including the consumer type and the processing type) and the Y-axis represents the average MP blade utilization. The chart title includes the array DKC number, the MP blade ID, and the metric name for which the graph is plotted.

Figure 3: Top 20 consumers of an MP Blade

3. Place the pointer over a column to view the following details for the respective consumer:

MP blade utilization by top 20 consumers

Array DKC, Consumer type, Consumer ID, and the processing type
Example: 1055, LDEV: 1:95 (Backend)
A consumer's association with a processing type provides an understanding of the number of processing cycles used by the consumer with different processing types. For example, an LDEV 0:09 may be involved in processing frontend and backend requests. Its processing type reveals whether the frontend or the backend requests have been high. Further to the above-mentioned example:
If the average utilization shown for the Frontend processing type is 30%, and 25% of it is contributed by the LDEV 1:95
If the average utilization shown for the Backend processing type is 25%, and 15% of it is contributed by the LDEV 1:95
then this data indicates that the LDEV 1:95 has been addressing high frontend requests when compared to the backend activities.

NOTE: The average utilization of an MP Blade is always on a scale of 1-100%.

Percentage of MP blade utilization by a consumer
Example: 1.84%
Indicates that 1.84% is the average MP blade utilization by the consumer 1:95 that belongs to the LDEV category and is associated with the Backend processing type. It also indicates that out of the average MP blade utilization displayed for the Backend processing type, 1.84% utilization is due to the consumer 1:95.

NOTE: A consumer's percentage of MP blade utilization may or may not be constant for the entire duration that you select. For example, it can be the utilization at a particular time stamp that is the maximum compared to the utilization by other components during the selected duration.
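The guide states only that the ranking uses each consumer's average MP blade utilization across the collection cycles in the selected duration. The following minimal Python sketch illustrates that averaging and ranking; the sample data is hypothetical.

```python
from statistics import mean

# consumer -> per-collection-cycle MP blade utilization (%) over the duration
samples = {
    ("LDEV", "1:95", "Backend"):        [1.9, 1.7, 1.9],
    ("LDEV", "0:09", "Frontend"):       [3.1, 2.8, 3.3],
    ("JNL",  "02",   "Open-initiator"): [0.4, 0.6, 0.5],
}

# Average each consumer across collection cycles, then keep the top 20.
averages = {consumer: mean(v) for consumer, v in samples.items()}
top20 = sorted(averages.items(), key=lambda kv: kv[1], reverse=True)[:20]
for (ctype, cid, ptype), util in top20:
    print(f"{ctype} {cid} ({ptype}): {util:.2f}%")
```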

View MP Blade Utilization by processing types

IMPORTANT: This section is applicable for P9500/XP7 disk arrays.

You can view the average MP blade utilization split up for the different processing types. For example, if the average MP blade utilization by a Frontend processing type is significantly higher than the other processing types, it indicates that more CPU cycles are utilized to process requests from the Frontend consumers. Similarly, if the average utilization by a Backend processing type is high, it indicates that more CPU cycles are utilized to process backend transfers for the Backend consumers. The following table describes the different processing types.

Open-target: Indicates all the frontend activities involved in processing the I/O requests.
Open-initiator: Indicates all the processing involved in the continuous access replication activities.
Open-external initiator: Indicates all the processing involved in accessing external storage.
Backend: Indicates all the backend activities involved in processing target I/O requests.
System: Indicates all the array system activities involved to service all the above-mentioned processing type requests.

Procedure
1. Scroll down the detail pane in the Processors screen to view the MP Blade utilization by processing types. A stacked area graph is displayed in a separate Chart View window and displays the MP blade utilization split up for the different processing types. Each area represents the percentage of average MP blade utilization by an individual processing type. The X-axis represents the duration that you select and the Y-axis represents the average MP blade utilization. The chart title includes the P9500/XP7 disk array DKC number, the MP blade ID, and the metric name for which the graph is plotted. The legends on the top right corner of the stacked area graph help you to identify the corresponding processing types.
2. Place the pointer over an area to view the following details for a processing type (see the above figure):

Processing types: The MP Blade utilization by all the processing types is displayed by default.
Date and time stamp: for example, :58:00.
Average MP blade utilization by a processing type (the average from the previous to the current time stamp): 9.82%, 0.09%, 8.60%, and 2.56% respectively by the Open-target, Open-external, Backend, and System processing types.

NOTE: If you want to graph only a single processing type, clear the legends for the rest of the processing types.

The above data helps you to understand the extent of MP blade utilization by a particular processing type when compared to the average utilization by all the processing types for the overall duration. The stacked areas partition the blade's total utilization, as illustrated in the sketch below.
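As a small illustration of how the stacked areas relate to the blade's overall load, the Python sketch below sums per-type averages at one time stamp into a total, assuming (as the stacked area graph suggests) that the per-type values partition the blade's total utilization. The numbers mirror the example values above.

```python
# Hypothetical per-processing-type average utilization (%) at one time stamp.
split = {
    "Open-target": 9.82,
    "Open-external initiator": 0.09,
    "Backend": 8.60,
    "System": 2.56,
}

total = sum(split.values())  # overall MP blade utilization at that time stamp
for ptype, util in split.items():
    print(f"{ptype}: {util:.2f}% ({100 * util / total:.0f}% of the blade's load)")
print(f"Total MP blade utilization: {total:.2f}%")
```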

Cache

About Cache

A cache is high-speed memory that is used to speed up the I/O transaction time. All reads and writes to the XP and P9500 disk arrays are sent to the cache. The data is buffered in the cache until the transfer to or from the physical disks (which have slower data throughput) is complete. The benefit of cache memory is that it speeds up the I/O throughput to the application. The larger the cache size, the greater the amount of data buffering that can occur, and the greater the throughput to the applications. In the event of power loss, battery power maintains the contents of the cache for a specified time period.

The cache function optimizes the processing of the XP7, P9500, and XP disk arrays. The logical partitioning of the cache is performed at the storage level by dividing the cache into multiple CLPRs to reduce the I/O workload. However, performance issues can still occur. PA performs detailed monitoring of the cache component for various performance and utilization capacities of the cache, and provides historical and real-time plotting of the collected data.

You can access the Cache screen using the main menu, or drill down from the Overview dashboard. The status of the cache is based on the default values that you have set for the Cache Usage and Cache Write Pending metrics. If at any point the data points for the metrics exceed the defined threshold limit for the specified threshold duration, then the array as well as the component is flagged as critical in the Overview and the Component levels of dashboards. For example, if the data points for the Cache Usage or Cache Write Pending metrics exceed the defined threshold value at least once, then the array as well as the cache is flagged as critical in the Cache Write Pending section in the Component widget in the Overview dashboard.

You can determine the cause of this performance bottleneck by clicking the critical icon. The Component dashboard is displayed with all the arrays that PA monitors in the master pane. In the Cache section in the detail pane, click the critical icon. Alternatively, click the title of the bar charts Cache Write Pending by CLPRs and Cache Size and Avg Cache Usage by CLPRs. You can also click an individual bar in the bar chart depicting a critical component. The height of each bar depicts certain metrics. For example, the pair of bars in Cache Write Pending by CLPRs displays the Avg and Max Write Pending details of the critical cache record.

The Cache screen displays the critical components sorted on top of the master pane. From the master pane, select one or more cache records for which you want to plot data. By default, the Chart View area in the detail pane displays the historical data for the last one hour. If the Cache Writes Pending Util or the Cache Usage Util metrics have crossed the default value that you have set, it indicates usage and performance issues in the cache component. The associated array components for the cache are RAID Groups and Processors. You can also monitor the performance of the associated components using the associated component tabs.
To eliminate performance issues in advance, use templates, which provide at-a-glance information on the behavior of critical components and metrics. You can also monitor real-time data, plot trend values for all the charts, and plot forecast lines for critical metrics to predict behavior within a specified time frame.

Cache screen details

Filters:
Arrays: Displays all the arrays that PA monitors.
Status: Displays cache records by status.

Master pane details:
Status icon: Displays the icon indicating the current status for the cache component type. The status for a cache component is determined based on the threshold values set for the Cache Usage Utilization and Cache Write Pending Utilization metrics.
CLPR ID: Displays the cache logical partition number.
CLPR Name: Displays the CLPR name.
Write Pending Utilization: Displays the write pending percentage of the cache component.

Detail pane > Chart View: Metrics (Default metric Yes/No):
Cache Writes Pending Utilization: Yes
Cache Usage Utilization: Yes
Cache Sidefile Usage Utilization: No
CLPR MPB Writes Pending Utilization: Yes
CLPR MPB Usage Utilization: Yes
Cache Usage MB: No
Cache Writes Pending MB: No
Cache Sidefile[1] Usage MB: No
CLPR Read Hits: Yes

Detail pane > Associated Components:
RAID Group: Redirects to the RAID Groups screen for the selected array.
Processor: Redirects to the Processors screen for the selected array.

[1] Sidefile is an area of cache used to store the data sequence number, record location, record length, and queued control information.

LDEVs

About LDEVs

A logical device (LDEV) is created when a RAID group is carved into pieces according to the selected host emulation mode. An LDEV is also referred to as a volume. The LDEVs associated with each RAID group are assigned an emulation mode that makes them operate like OPEN system disk drives. The emulation mode determines the size of an LDEV (OPEN-3: 2.46 GB, OPEN-8: 7.38 GB, OPEN-9: 7.42 GB, OPEN-E: GB, OPEN-K: Not available, OPEN-L: 36 GB, OPEN-M: Not available, OPEN-V: user-defined custom size). The number of resulting LDEVs depends on the selected emulation mode.

PA enables you to monitor the performance of LDEVs for XP and XP7 disk arrays. For example, if you are dealing with high response time in an XP7 disk array, PA enables you to investigate the performance of its LDEVs. You can access the LDEVs screen using the main menu, or drill down from the Performance widget in the Overview dashboard.

In the Performance widget, the following LDEV parameters are displayed: LDEV Total Frontend IO, LDEV Total Frontend Throughput, and LDEV Response Time. Monitor whether any of the sections has arrays flagged as critical. You can hover over the Status of each LDEV parameter to view the metrics that determine the health of the LDEVs of the array. For example, if the LDEV Response Time section in the Overview dashboard has flagged certain arrays as critical, hover over the Read Response Time status to see the metrics that determine the health of LDEVs. The tool tip displays Status is computed based on LDEV Avg read response time. Click the critical icon. The Performance dashboard displays a list of arrays in critical status sorted on top of the master pane. Select the array that you want to monitor. To identify the LDEVs experiencing slow response time, navigate to the Top 10 LDEV By Max Response Time graph, and then:

1. Click an individual bar in the graph depicting a critical LDEV. The height of each bar represents the value. The LDEVs screen is filtered to display only the LDEV that you selected from the bar chart.
2. Alternatively, click the graph title Top 10 LDEV By Max Response Time. The LDEVs screen is displayed listing the top 10 LDEVs on top of the master pane.
3. Select a critical LDEV, and navigate to the Average Read Response metric in the detail pane. Monitor whether the values are more than the set threshold level. If the data points have exceeded the set threshold value, this requires your immediate attention. You can create a template and add the metric to it for frequent monitoring of data.

NOTE: For effective monitoring, you can also append the IOPS and MBPS metrics to the template.

4. To check whether the associated components also have performance issues, click the required associated component tab. You can add metrics of associated components as part of the template you created.

To eliminate performance issues in advance, templates provide at-a-glance information on the behavior of critical LDEV components and their metrics. You can also monitor real-time data, plot trend values for all the charts, and plot forecast lines for metrics to predict behavior for a specified time period.

Search for an LDEV

You can filter and search for an LDEV using the LDEV text box in the filter pane below the banner. Provide the LDEV ID, for example, 01:94, in the text box, and click Apply. Click Reset to view all the LDEVs in the master pane.

LDEVs screen details

Filters
  Arrays: Displays all the arrays that PA monitors.
  Status: Displays LDEVs of all statuses.
  Port Name: Displays all the ports configured on the array.
  RAID Group: Displays all the RGs configured on the array.
  Host Group: Displays all the host groups configured on the selected array.
  MP Blades Name: Displays all the processors for the selected array.
  Journal Name: Displays all the journals configured on the selected array.
  LDEV Types: Displays all the types of LDEVs.

Master pane details
  Status icon: Displays the status icon for an individual LDEV. The status for an LDEV is determined based on the threshold values set for the Average Read Response Time and Average Write Response Time metrics. To view, configure, or edit threshold settings for LDEV metrics, see Threshold settings on page 174.
  LDEV: Displays the identification number of the logical device. If capacity saving functions, such as compression and deduplication, are enabled for an LDEV that is configured in a thin pool, the following appear along with the LDEV identification number:
    C - Enabled: Indicates that DKC compression is enabled on the volume (V-VOL).
    C & D - Enabled (Post-Process): Indicates that compression and deduplication are enabled with post-processing on the volume.
    Rehydrating: Indicates that the capacity saving function is disabled on the volume.
  RG: Displays the RAID Group to which the LDEV belongs.
  Port Name: Displays the corresponding port for the selected LDEV.
  Hostgroup: Displays the corresponding host group for the selected LDEV.

  MPB Name: Displays the name of the corresponding processor to which the LDEV is assigned (applicable only to XP7 and P9500 arrays).
  JID: Journal Group ID to which an LDEV is assigned.
  Avg Read/Write Resp: The average latency for reads or writes on the LDEV during the specified threshold duration.

Detail pane > Chart View: Metrics (Default metric: Yes/No)
  LDEV Total IO - Frontend: Yes
  Total IO Reads - Frontend: No
  Total IO Writes - Frontend: No
  LDEV Total Random IO - Frontend: No
  LDEV Random Reads - Frontend: No
  LDEV Random Read Cache Hits - Frontend: No
  LDEV Random Writes - Frontend: No
  LDEV Total Sequential IO - Frontend: No
  LDEV Sequential Reads - Frontend: No
  LDEV Sequential Read Cache Hits - Frontend: No
  LDEV Sequential Writes - Frontend: No
  LDEV Total IO Miss - Frontend: No
  LDEV Total MB - Frontend: Yes
  LDEV Total MB Reads - Frontend: No
  LDEV Total MB Writes - Frontend: No
  LDEV Total Random MB - Frontend: No
  LDEV Random MB Read - Frontend: No
  LDEV Random MB Write - Frontend: No
  LDEV Total Sequential MB - Frontend: No

  LDEV Sequential MB Read - Frontend: No
  LDEV Sequential MB Write - Frontend: No
  LDEV Sequential Read Tracks - Backend: No
  LDEV Non-sequential Read Tracks - Backend: No
  LDEV Write Tracks - Backend: No
  LDEV Total Tracks - Backend: Yes
  LDEV Avg Read Response: Yes
  LDEV Maximum Read Response: No
  LDEV Avg Write Response: Yes
  LDEV Maximum Write Response: No
  V-Vol Tier Capacity Distribution: No

Detail pane > Associated Components
  RAID Group: Displays the RAID Group to which the selected LDEV record is assigned.
  Host Group: Displays the host groups to which the selected LDEV is assigned.
  Port: Displays the port that is associated with the selected LDEV.
  Processor: Displays the processor associated with the selected LDEV.

NOTE:
1. Components listed in the Actions menu are specific to the LDEVs that you select from the master pane.
2. Refer to Metric Category, metrics, and descriptions on page 336 for the list of all metrics per component and their descriptions.

RAID Groups

About RAID Groups

Redundant array of independent disks (RAID) is a disk array in which part of the physical storage space is used to store user data and parity information, and another part is used to store a duplicate set of user data and parity information. This redundant configuration prevents data loss if a disk drive within the RAID configuration fails, and enables regeneration of user data in the event that one of the array's member disks or the access path to it fails.

A RAID Group is a set of RAID disks that have the same capacity and are treated as one group for data storage and recovery. A RAID group contains both user data and parity information. This allows user data to be accessed if one or more of the drives within the RAID group are not available. The RAID level of a RAID group determines the number of data drives and parity drives and how the data is striped across the drives. For RAID1, user data is duplicated within the RAID group, so there is no parity data for RAID1 RAID groups. RAID levels include RAID0, RAID1, RAID2, RAID3, RAID4, RAID5, and RAID6. A RAID group can also be called an array group or a parity group.

The RAID Groups component type comprises the individual RAID Groups that are configured on XP and XP7 disk arrays. The main menu is the primary navigation to access the RAID Groups component screen. The Component widget in the Overview dashboard provides at-a-glance status of the RG Utilization for all the arrays that PA monitors. To view the metrics that determine the status of RG Utilization, hover over Status. It displays Status is computed based on Backend RG Sequential Reads, Backend RG Nonsequential Reads, Backend RG Writes, Backend RG Util. The bar chart below provides the average and maximum RAID Group utilization details for each array. Clicking any of the bars redirects you to the RAID Groups screen.

You can also click the critical status to display the Component dashboard with the critical arrays listed on top of the master pane. Click the critical icon in the Backend section to navigate to the RAID Groups screen. You can also click the graph titles Top 10 Internal RG By Avg Utilization and Top 10 Internal RG By Max Utilization to view the top 10 RG consumers by average and maximum utilization, respectively. Click an individual bar in the graph to filter and display only that RAID Group.

In the RAID Groups screen, monitor the utilization data of the metrics, and see whether any of the metrics that determine the status of the RG component has crossed the threshold value. If one of the metrics has crossed the threshold value at least once during the threshold duration, that array is flagged as critical in the Overview dashboard along with the component. The default threshold duration is 6 hours.

To eliminate performance issues in advance, templates provide at-a-glance information on the behavior of critical components and metrics. You can also monitor real-time data, plot trend values for all the charts, and plot forecast lines for critical metrics to predict behavior within a specified time frame. The associated array components for RAID Groups are cache and LDEVs. To view the caches and LDEVs that are associated with a RAID Group in the same screen, click the respective associated component tab. You can filter RAID Group records by array type, status, RAID Group level, RAID Group Name, and Drive Type.

Use the Template feature for frequent monitoring of performance metrics and to foresee performance bottlenecks in advance. Templates provide at-a-glance information on the behavior of critical components and metrics.

RAID Groups screen details

Filters
  Arrays: Displays all the arrays that PA monitors.
  Status: Displays the status of an individual RG component.

  RAID Group: Displays all the RAID Groups configured in the selected array.
  RAID level: Displays all the RAID levels in an array. The XP7 array supports the following RAID levels: RAID1, RAID5, RAID6.
  Drive Type: Displays all the supported drive types.

Master pane details
  Status icon: Displays the icon indicating the current status for the selected RAID Group. The status for a RAID Group is determined based on the Overall Raidgroup Utilization metric.
  RG: The RAID Group to which the host belongs.
  RAID Level: The corresponding RAID level for a RAID Group. You cannot select a RAID level if the drive type is External Storage.
  Drive Type: The corresponding drive type for a RAID Group.
  Utilization: The total utilization of the RAID Group.

Detail pane > Chart View: Metrics (Default metric: Yes/No)
  RAID Group Overall Utilization: Yes
  RAID Group Utilization Random Reads: No
  RAID Group Utilization Random Writes: No
  RAID Group Utilization Random Write Parity: No
  RAID Group Utilization Seq Reads: No
  RAID Group Utilization Seq Writes: No
  RAID Group Utilization Seq Write Parity: No
  RAID Group Total Tracks - Backend: Yes
  RAID Group Sequential Read Tracks - Backend: No
  RAID Group Non-sequential Read Tracks - Backend: No

  RAID Group Write Tracks - Backend: No
  RAID Group Total IO - Frontend: Yes
  RAID Group Total Random IO - Frontend: No
  RAID Group Random Reads - Frontend: No
  RAID Group Random Read Cache Hits - Frontend: No
  RAID Group Random Writes - Frontend: No
  RAID Group Total Sequential IO - Frontend: No
  RAID Group Sequential Reads - Frontend: No
  RAID Group Sequential Read Cache Hits - Frontend: No
  RAID Group Sequential Writes - Frontend: No
  RAID Group Total MB - Frontend: Yes
  RAID Group Total Random MB - Frontend: No
  RAID Group Random Read MB - Frontend: No
  RAID Group Random Write MB - Frontend: No
  RAID Group Total Sequential MB - Frontend: No
  RAID Group Sequential Read MB - Frontend: No
  RAID Group Sequential Write MB - Frontend: No

Detail pane > Associated Components
  Ldev: Displays all the LDEVs that are associated with the selected RAID Groups.
  Cache: Redirects to the Cache screen for the selected array.

NOTE: Refer to Metric Category, metrics, and descriptions on page 336 for the list of all metrics per component and their descriptions.

Thin Provisioning and Smart Pools

About Thin Provisioning and Smart Pools

Thin Provisioning is a volume management feature that maximizes the physical usable capacity of disk arrays. It is implemented by creating one or more Thin Provisioning pools (THP pools) of physical storage space using multiple LDEVs. You can then establish virtual THP volumes (THP V-VOLs) and connect them to the individual THP pools. In this way, capacity to support data can be randomly assigned on demand. A pool is a set of volumes that are reserved for storing Snapshot data or Thin Provisioning write data. A pool volume is a logical volume that is reserved for storing snapshot data for Snapshot operations or write data for Thin Provisioning.

Smart Pools is a feature of the XP7 disk array with which you can configure a storage system with multiple storage tiers. This support allows you to allocate data areas with heavy I/O loads to higher-speed media and data areas with low I/O loads to lower-speed media. In this way, you can make the best use of the capabilities of the installed storage media. Up to three storage tiers consisting of different types of data drives are supported in a single pool of storage.

PA enables you to monitor the performance of the ThP and Smart Pools installed in the XP and XP7 disk arrays. The status of the ThP and Pools component is based on the threshold values that you have set for the Pool Avg Read Response Time and Pool Avg Write Response Time metrics.

Select a smart pool from the main pane, and click Actions > Average IOPH per Capacity to plot the chart for the array. On the graph, the x-axis indicates capacity in GB, and the y-axis indicates the average number of I/Os per hour. Click the Smart Pool ID to add or remove the record from the graph. Hover over a node on an individual line in the graph for more information.

Use the Template feature for frequent monitoring of performance metrics and to foresee performance bottlenecks in advance. Templates provide at-a-glance information on the behavior of critical components and metrics.

Thin Provisioning and Smart Pools screen details

Filters
  Arrays: Displays all the arrays that PA monitors.

  Status: Displays all statuses by default.
  Pool Name: Displays all the pools assigned for the selected array.

Master pane details
  Status icon: The status of a ThP or Pool record is computed based on the Pool Average Read Response Time and Pool Average Write Response Time metrics.
  Pool ID: Displays the pool number.
  Pool Type: Displays how the pool is being used. For Thin Provisioning, THP appears. For Smart Pools, Smart appears. NOTE: A real-time tier enabled Smart pool is displayed as Smart (Real time tier).
  Pool Status: Displays the pool status:
    Normal: The pool is in a normal status.
    Over threshold: The used capacity of the pool exceeds the pool threshold value that you have set for the pool metrics.
    Blocked: The pool is full, or an error occurred in the pool; therefore, the pool is blocked.
    Failure: The Smart or ThP pool is in a failed state. The V-VOLs performance data, the respective RAID Groups, and the pool LDEVs utilization data are not displayed for such pools.
  Savings%(FMD Gen2/Comp/Dedup): Displays the savings (%), deduplication, and compression ratio for the V05+1 feature in FMD Gen2 drives.
  Average Read/Write Response Time: Displays the average of the average read response time and average write response time of the Pool Virtual Volumes.

Detail pane > Chart View: Metrics (Default metric: Yes/No)
  Pool Utilization: Yes
  Pool Frontend Vs Backend Hit Ratio: No
  Pool Total IO - Frontend: Yes
  Pool Total Random IO - Frontend: No

  Pool Total Random Read - Frontend: No
  Pool Total Random Read Cache Hits - Frontend: No
  Pool Total Random Write - Frontend: No
  Pool Total Sequential IO - Frontend: No
  Pool Total Sequential Reads - Frontend: No
  Pool Total Sequential Reads Cache Hits - Frontend: No
  Pool Total Sequential Writes - Frontend: No
  Pool Total MB - Frontend: Yes
  Pool Total Random MB - Frontend: No
  Pool Total Random Read MB - Frontend: No
  Pool Total Random Write MB - Frontend: No
  Pool Total Sequential MB - Frontend: No
  Pool Total Sequential Reads MB - Frontend: No
  Pool Total Sequential Write MB - Frontend: No
  THP Pool Backend Tracks - Backend: No
  Pool Backend Tracks - Backend: No
  Pool Max Read Response Time: No
  Pool Max Write Response Time: No
  Pool Avg Read Response Time: Yes
  Pool Avg Write Response Time: Yes

Detail pane > Associated Components
  Average IOPH per Capacity: Displays the average IOPH against pool capacity; the chart displays data for all the monitoring cycles available for the duration selected by the user.
  Vvol: Displays the corresponding number of V-VOLs associated with the selected pool.
  RAID group: Displays the RAID Group records associated with the selected pool.
  Tiers: Displays the tiers associated with the selected pool.

For a description of the Pool metrics, see Metric Category, metrics, and descriptions.

Continuous Access

About Continuous Access

The Continuous Access screen provides the configuration and performance details on the pair status of the primary and secondary systems. To view performance and utilization details of Continuous Access data in the P9500/XP/XP7 disk arrays, click Continuous Access from the main menu. You can also navigate to the CA screen from the Overview and Continuous Access dashboards.

The Continuous Access widget in the Overview dashboard provides at-a-glance data about the health of the Continuous Access volumes in the arrays that you manage. For example, the numbers in CA Pair Status in Overview > Continuous Access depict arrays. If the critical status is shown as 2, it means that the pair status in two arrays is critical. Click the critical icon to navigate to the Continuous Access dashboard. Select the array from the master pane, and under Status > CA Pair Status, click the critical icon. The Continuous Access screen displays all the critical components with the status on the tool tip as Pair Suspend Error. This requires the immediate attention of the administrator if the critical volumes are active volumes that carry out regular data transfer.

To view the host port of the selected CA, click the Host Port tab. Similarly, you can also monitor the associated CA port.

Use the Template feature for frequent monitoring of performance metrics and to foresee performance issues in advance. Templates provide at-a-glance information on the behavior of critical components and metrics. You can also monitor real-time data, plot trend values for all the charts, and plot forecast lines for critical metrics to predict behavior within a specified time frame.

Continuous Access screen details

Filters
  Arrays: Displays all the arrays that PA monitors.
  Pair Status: Displays the records which have the CA Pair Status as Ok, Warning, Critical, or Unknown.

  CA Link Status: Displays the CA link of all statuses by default. The status displays as critical if the CA link is down.

Master pane details
  Pair Status: Current replication link status of the P-VOL or S-VOL on the selected array. The replication link status shown corresponds to the Continuous Access transactions happening on the selected array, and can be one of the following:
    SMPL
    COPY
    PAIRED
    Pair Suspendx
    Pair Suspend Error
    Pair DUB
    Reverse Copy
    Pair SideFile 30% over
    Pair SideFile over Suspend
    SVOL Swap ready
    Suspend
    Unknown
  Primary DKC: Serial number of the primary disk array (primary data center).
  PVOL: LDEV configured as P-VOL on the primary data center. Displays the LDEV number in cu:ldev format.
  PVOL Host Port: Host port assigned for the P-VOL.
  Secondary DKC: Serial number of the secondary array.
  SVOL: LDEV configured as S-VOL on the secondary data center. Displays the LDEV number in cu:ldev format.
  SVOL Host Port: Host port assigned for the S-VOL.
  CA Link Status: Failed or Active. NOTE: When Continuous Access is configured as Sync or Async and the selected volume type is SVOL, you might encounter the CA Link Status as NA - Not Applicable.

  JID: Journal ID of the journal group associated with the P-VOL or S-VOL. The value in the column includes the journal ID and mirror ID on which the CA-J pair is created. Each pair relationship between journals is called a "mirror". Example: jnl ID:mirror ID.

Detail pane > Chart View: Metrics (Default metric: Yes/No)
  RPO: Yes
  Ldev IOPS - Frontend: Yes
  Ldev MBPS - Frontend: Yes
  Ldev Avg Read Response: Yes
  Ldev Avg Write Response: Yes
  Ldev Total Tracks - Backend: Yes

Detail pane > Associated Components
  Host Ports: Displays all the ports where the P-VOL is presented to the host, in other words, when a host port is assigned to a P-VOL.
  CA Ports: Displays all the ports that are assigned for the Continuous Access activity. It may be CA Sync, Async, or Journal.
  Journals: Displays the journal that is associated with the selected Continuous Access record.

NOTE: Refer to Metric Category, metrics, and descriptions on page 336 for the list of all metrics per component and their descriptions.

Journals

About Journals

For Continuous Access operations, journal volumes help you manage data consistency between multiple P-VOLs and S-VOLs. A journal consists of two or more data volumes and journal volumes. You use journals to create multiple pairs and to split, resynchronize, and release multiple pairs. Journals are used in Continuous Access to guarantee data consistency across multiple pairs, and are required on the primary and secondary systems.

The status for a journal is based on the Journal Status and the RPO metrics. You can access the Journals screen using the main menu, or drill down from the Overview and Continuous Access levels of the dashboard. If the RPO metric crosses the defined threshold value, the journal and the array on which it is configured are flagged as critical. Use the associated component tabs to monitor the associated Continuous Access records and LDEVs on the same screen.

Use the Template feature for frequent monitoring of performance metrics and to foresee performance issues in advance. Templates provide at-a-glance information on the behavior of critical components and metrics. You can also monitor real-time data, plot trend values for all the charts, and plot forecast lines for critical metrics to predict behavior within a specified time frame.

Journals screen details

Filters
  Arrays: Displays all the arrays that PA monitors.
  Status: Displays the status of the CA journal pair.
  Journal Name: Displays the journal name.

Master pane details
  Journal ID: Each pair relationship between journals is called a "mirror". A mirror ID identifies a pair relationship between journals. When the pair is created, it is assigned a mirror ID. Example: JID-mirror ID.

  Journal Status: State of the journal group; can be one of the following:
    JSTAT_SMPL: The journal volume that is configured but not assigned to a pair.
    JSTAT_NONE: The specified JID does not exist.
    JSTAT_P(S)JNN: P(S)VOL Journal Normal Normal.
    JSTAT_P(S)JSN: P(S)VOL Journal Suspend Normal.
    JSTAT_PJNF: P(S)VOL Journal Normal Full.
    JSTAT_P(S)JSF: P(S)VOL Journal Suspend Full.
    JSTAT_P(S)JSE: P(S)VOL Journal Suspend Error, including link failure.
  RPO (sec): The difference between the data write times for the primary and secondary volumes, represented in seconds.

Detail pane > Chart View: Metrics
  RPO: The difference between the data write times for the primary and secondary volumes, represented in seconds.
  Journal Utilization: Displays the utilization rate of the journal.
  Journal Pvol Throughput: Displays the throughput rate in MB/s for the P-VOL journal.
  Journal Async Transfer Rate: The average transfer rate (MB/sec) for journals in the storage system.

  Journal Write IOPS: Total write I/O per second of the P-VOL based on the selected journal ID.
  Journal RIO Response Time: The remote I/O average response time (msec) on the storage system.

Detail pane > Associated Components
  Continuous Access: Displays the CA records that are associated with the selected journal.
  LDEV: Displays the LDEVs that are associated with the selected journal.

For a description of these metrics, see Metric Category, metrics, and descriptions.

Use charts

Use charts to monitor the performance of various components of a disk array. You can plot performance graphs to view historical data of components that belong to the same or different XP and XP7 disk arrays. Graphical representation of component performance metrics is especially useful when you want to compare similar components of different XP and XP7 disk arrays to determine their performance and observe trends.

Auto Update charts

Procedure

Navigate to the required component screen and select the Auto Update option. The latest data points are appended to the utilization and performance charts without manually refreshing the screen whenever a collection is performed. The latest data points are added to the right side of the charts. Clear the Auto Update option before plotting real-time, forecasting, or trending graphs for a component or a feature.

If you want to remove old data points from the charts, then:

1. From HPE XP7 Performance Advisor, click PA Settings.
2. Click Edit in the Realtime Update Settings pane.
3. Select the Append data points and Auto Flush old data points for the charts checkbox, and click Update.
4. Click Save to close the Edit Realtime/Auto Update screen.

View 50th, 90th, and 95th percentile values in charts

The metrics may display transient peaks beyond the average performance. Sizing to these peaks can lead to enormous bandwidth provisioning and cost. Percentiles are an effective way to exclude the impact of these transient spikes from the calculation of the bandwidth requirements. To estimate the required bandwidth and other parameters for metrics, you can view percentile values for the respective metrics in the Chart View pane. A percentage measures "how many" in the whole, whereas a percentile in a chart indicates the percentage of data points that lie above and below a given value.
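PA computes these percentile values internally. As a rough illustration of what a percentile over a chart's data points means, the following Python sketch uses a simple nearest-rank method; PA's exact calculation is not documented here, and the sample values are hypothetical.

```python
# Nearest-rank percentile over a chart's data points (illustrative only).
def percentile(points, pct):
    ordered = sorted(points)
    # Index of the smallest value at or below which pct% of points fall.
    rank = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[rank]

# Ten hypothetical Maximum Port MB - Frontend data points.
mb_frontend = [310, 322, 348, 335, 360, 341, 354, 329, 347, 352]
for pct in (50, 90, 95):
    print(f"{pct}th percentile: {percentile(mb_frontend, pct)}")
```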

For instance, if the 50th percentile value for the Maximum Port MB-Frontend metric for a port component displays a value of 348, this indicates that 50% of the data points are above 348, and 50% are below 348. Similarly, if the 90th percentile value displays a value of 354, it indicates that 90% of the data points are below 354. In addition to providing data pertaining to the maximum MBs through that port, PA also displays the 50th, 90th, and 95th percentile values based on the performance data over a given duration for the port component.

NOTE: The graphs show the performance data for the last one hour. If there is no performance data for the selected component/feature during the last one hour, or the average read and write values are zero, the 50th, 90th, and 95th percentile values are not displayed.

Prerequisites

The configuration and the performance data collection for the selected array is complete.

Procedure

1. From the HPE XP7 Performance Advisor main menu, navigate to the component or the feature screen, as required. For example, from the main menu, navigate to the Ports screen.
2. From the Array list, select an array, and then select a required component from the master pane. The corresponding performance graph is displayed in the Chart View pane.
3. To view the 50th, 90th, and 95th percentile values of a metric, navigate to the graph and hover over it. The values are displayed in the chart as a tool tip.

About Real-time monitoring

PA performs real-time monitoring of the XP/XP7 disk arrays, where performance data is collected at intervals as low as a few seconds (approximately 10 seconds per component). The real-time data is collected through the following PA host agents that support the real-time performance data collection:

Windows hosts
Linux hosts
Solaris hosts
HP-UX hosts
AIX hosts

When you install the host agent, the real-time server is also automatically installed on the host agent. For more information on the host agent installation, see the HPE XP7 Performance Advisor Software Installation Guide. You can collect the real-time performance data for LDEVs, RAID Groups, ports, cache, Pools, Processors, and Host Groups in an array. This data is collected for a set of real-time metrics that PA supports.

When you start a real-time performance data collection for an XP or an XP7 disk array, the following sequence of steps occurs:

1. The associated host agent collects real-time performance data from the disk array and sends the data to PA.
2. PA plots a graph of the real-time performance data points.

This process continues until you click Reset. Real-time plotting also stops if you attempt to perform a new action using the Actions menu, navigate to a different screen, or select a new record from the master pane.

IMPORTANT: Real-time monitoring is supported for the XP7 disk arrays and the following XP disk array models: P9500, XP24000, XP. Real-time monitoring of metrics is available for the following components and features in XP/XP7 arrays: Ports, Host Groups, Processors, Cache, LDEVs, RAID Groups, THP/SMART Pools.

PA maintains the configuration data for an XP or an XP7 disk array on the management station. The real-time server also maintains the same data on the PA host. Ensure that both these instances of configuration data are the latest.

By default, the real-time server on the PA host agent uses port 8331 to communicate with the management station. This port number is stored in the xprmihaserver.properties file. On the management station, the xprmihaserver.properties file is located in the hpss/pa/properties folder. On the host agent, the xprmihaserver.properties file is located in the xppa/realtime/config folder. However, if you want to use a different port, complete the following steps (see the sketch after this list):

1. On the management station, open the xprmihaserver.properties file in a text editor.
2. Update the new port number or the default port number in the Port.Number field.
3. Restart the PA service on the management station.
4. On the PA host agent, open the xprmihaserver.properties file in a text editor and repeat step 2.
5. Restart the PA host agent service.

You can also restore the default port setting to 8331 using the above-mentioned steps.
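As a minimal sketch of steps 2 and 4, the change in both copies of the file is a single key update. Only the Port.Number key is documented in this guide; the example port value below is an assumption, and any other keys the file may contain are omitted.

```
# xprmihaserver.properties (edit both copies, then restart the services)
#   Management station: hpss/pa/properties/xprmihaserver.properties
#   Host agent:         xppa/realtime/config/xprmihaserver.properties
# Replace the default 8331 with the port you want to use, for example:
Port.Number=9331
```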

Start/Stop real-time performance data collection

Prerequisites

HPE recommends that you dedicate a command device for the real-time performance data collection, so that it is not used by PA for the regular configuration or performance data collection. Ensure that the Auto Update option is disabled. Perform the following steps to ensure that the same configuration data for the selected XP/XP7 disk array is available on both the host agent real-time server and PA:

Procedure

1. Request a host update.
2. Initiate the configuration data collection from PA, so that the latest configuration data is available for the XP/XP7 disk array.
3. Configure the real-time database by setting a host agent and a command device for the array.

Starting real-time data collection

1. From the HPE XP7 Performance Advisor main menu, navigate to a component screen, and then click Actions > Real Time. This starts the real-time plotting of data for all the metrics that support the real-time feature among the selected metrics. The Realtime... icon appears in the Date/time filter pane, indicating that the real-time charting of performance data has begun.
2. To plot real-time data for more metrics, go to Actions > Select Metrics, and add metrics that support real-time.

NOTE: To stop real-time data collection, click Reset. This feature is enabled only for certain metrics; for the metrics that do not support real-time monitoring, the graphs plot the historical data points as usual. The real-time performance graphs are not displayed for components that are currently not processing any I/O requests (described as unused components).

3. If you want to monitor the real-time data of individual metrics, navigate to the required metrics in Chart View, right-click, and select Real Time. You can monitor one or more metrics in this manner.

IMPORTANT: You can also initiate the real-time feature for supported metrics from the Templates screen.

Stopping real-time data collection

To stop monitoring real-time data, click Reset adjacent to the Date/time filters.

NOTE: The real-time feature also stops when you select a new component in the master pane or navigate to a different screen in the main menu.

About trending and forecasting

Chart the performance characteristics of a disk array over a specified time interval by using the trending and forecasting options. These charts help you recognize patterns and anomalies associated with the specific time when a certain activity takes place. The forecasting can be for a day, a week, or a month based on the current data points. Trending and forecasting are supported for the following metrics:

Maximum Port IO - Frontend
Average Port IO - Frontend

Maximum Port MB - Frontend
Average Port MB - Frontend
Port Hourly Throughput
Port Daily Throughput
Cache Writes Pending Utilization
CLPR MPB Usage Utilization
Cache Usage MB
Average MP Blade Utilization
Pool Utilization
Pool Total IOPS
Pool Average Read Response Time
Pool Average Write Response Time
Raid Group Utilization
Ldev Total IOPS
Ldev Total MBPS
Ldev Average Read Response Time
Ldev Average Write Response Time

Plot trending graphs

Prerequisites

Ensure that the performance chart supports the trending option. Ensure that sufficient data is plotted for trending in the performance chart; you must have a minimum of 12 data points for the trending option. Ensure that the Auto Update option is disabled.

Procedure

1. From HPE XP7 Performance Advisor, select a component under Components.
2. Perform one of the following:
   To view trending lines for a single performance chart, right-click the performance chart and select Trend on the Chart View screen.
   To view trending lines for all performance charts, click Actions > Trend.

Plot forecasting graphs

Prerequisites

Ensure that the performance chart supports the forecasting option. Ensure that sufficient data is plotted for forecasting in the performance chart; you must have a minimum of one day of data for the daily forecast option, one week of data for the weekly forecast, and one month for the monthly forecast option. Ensure that the Auto Update option is disabled.

Procedure

1. From HPE XP7 Performance Advisor, select a component under Components.
2. To view forecasting lines for a single performance chart, right-click the performance chart and select among Daily Forecast, Weekly Forecast, or Monthly Forecast on the Chart View screen.
3. To view forecasting lines for all performance charts, click Actions, and select among Daily Forecast, Weekly Forecast, or Monthly Forecast. For the historic view, click Reset on the Chart View screen.

NOTE: Plot Port hourly and daily throughput forecast charts separately. You can view negative forecast values when there is a dip in chart values near zero. The following components do not support the forecasting option: Host Groups, Journals.
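This guide does not specify the forecasting model that PA uses. As a conceptual illustration only, the following Python sketch fits a least-squares line to historical data points and extrapolates it forward; the sample data is hypothetical.

```python
# Conceptual illustration of trend-based forecasting: fit a least-squares
# line to historical samples and extrapolate it. This is not PA's
# documented algorithm.
def linear_forecast(points, steps_ahead):
    n = len(points)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(points) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, points))
    slope /= sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return [intercept + slope * (n - 1 + s) for s in range(1, steps_ahead + 1)]

# 24 hypothetical hourly utilization samples; forecast the next 24 hours.
history = [40 + 0.5 * h for h in range(24)]
print(linear_forecast(history, 24)[:3])  # [52.0, 52.5, 53.0]
```

Extrapolating a fitted line in this way also shows why forecast values can turn negative when the recent data points trend toward zero, as noted above.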

Save charts as PDF or CSV files

PA enables you to save individual charts as well as the entire set of charts that are added in the Chart View.

Save all charts in the chart work area

1. Navigate to the required component screen. From the Actions menu, click Select Metrics and add the desired metrics in the Chart View pane.
2. To save the charts, navigate to Actions > Save As, and then select the format as PDF or CSV. The chart is generated and you are prompted to save the file on the local machine.

To remove any chart from the PDF or CSV output, go to Actions > Select Metrics > Metrics Category, and clear the check box against the desired metric. Click Ok. Click Save As again to generate the new output.

Save individual charts

1. In the Chart View pane, right-click the chart that you want to save.
2. Click Save As, and select the format as PDF or CSV.

NOTE: An individual chart is saved by the metric name. If you export all charts in the selected component, the chart is saved by the component name.

Set alert on array components from charts

You can configure and enable alerts on a component from the performance charts.

Procedure

1. Navigate to the required component screen. Either choose the default plotted chart, or click Actions > Select Metrics and add the desired metrics in the Chart View pane.
2. To configure and enable an alert, right-click the chart for which you want to enable the alert, and select Add Alert(s).
3. On the Add Alert(s) screen, enter the threshold value and the number of occurrences for that event.
4. To set the Email Destinations or SNMP Destination to receive the alert notifications, click Select More. You can also enter the Script File destination.
5. Click Add.

Email charts as PDF/CSV

PA enables you to email individual charts as well as the entire set of charts that are added in the Chart View. PA uses a system-defined email application, such as Microsoft Outlook, to send charts as attachments.

Emailing all charts in the chart work area

Prerequisites

You must set up a default email application to email charts.

Procedure

1. Navigate to the required component screen. From the Actions menu, click Select Metrics and add the desired metrics in the Chart View pane.
2. To email the charts, navigate to Actions > Send As, and then select the format as PDF or CSV.
3. In the Email Address page, enter a valid destination email address, and click Send. The chart is generated and you are prompted to save the file on the local machine. You must manually attach the file when the default email application opens.
4. Locate the file, and to attach the file, click Attach File.
5. Click Send.

Emailing individual charts

1. In the Chart View pane, right-click the chart that you want to email.
2. Click Send As, and then select the format as PDF or CSV.
3. Follow steps 3 and 4 described above.

Zoom in on data points across performance graphs

NOTE: In addition to zooming in on data points for a particular duration, you can also zoom in on a combination of data points in the chart window.

Hold down the mouse button, and then drag the pointer across the data points that you want to focus on. The chart window displays the focused set of data points, and the slider in the zoom panel automatically shifts focus to the selected set of data points. In the following set of images, the first image displays the data points being focused across the performance graphs. The second image displays the focused area with data points, with the Zoom panel updated accordingly.

To zoom out, scroll and drag the sliders in the Zoom panel back to the left and right extremes.

NOTE: The zoom option resets as soon as the real-time data points are updated in the graph.

Rearrange or move chart windows

To move or rearrange chart windows in the Chart View pane, click the title bar of the chart window that you want to move and, holding down the left mouse button, drag and drop that chart over an existing chart where you want the new chart to be placed in the Chart View pane. The existing chart window automatically shifts to accommodate the relocated chart window. The following are a few use cases for when you might want to rearrange charts in the chart work area:

If you want to compare performance graphs of components plotted for metrics that belong to different metric categories
If you have performance graphs of components for related metrics plotted in different chart windows

Templates

About templates

PA enables you to create and store chart templates for quick retrieval. Using the templates feature, you can save various components and metrics that you frequently monitor as template charts, and monitor all of them on a single page. This allows you to compare data of different component types. You can also create and generate reports for the templates that you have created. Whenever you want to view the performance graphs for the same set of components and metrics, load the corresponding template chart in the Templates screen. Hence, you need not select the same combination of components and metrics again to plot the performance metrics. The template charts provide a template framework, where you can continuously append new components and metrics to the existing list. By default, a selected template chart displays the historical charts for the last one hour of the current duration. To filter data within a date and time range, see Using date and time filters.

You can also enable real-time plotting for all the charts on a template, and export the data in PDF and CSV formats.

NOTE: To view the components and metrics that you want to frequently monitor in the Templates screen, you must first add and save the components and metrics as templates.

Reuse/Apply a template

You can create a chart template for metrics that you frequently monitor. PA also enables you to apply this template to a different array of your choice instead of recreating a chart with the same set of metrics.

Prerequisites

An existing template which you want to recreate on a different array.

Procedure

1. From the HPE XP7 Performance Advisor main menu, navigate to Templates.
2. Select the desired template from the Templates filter. All the templates which you create on component/feature screens are displayed in the filter.
3. Click Apply this Template.
4. From the drop-down filter in the Apply Template page, select an array to apply the chosen template.
5. Click Apply. A new template is created, and the array serial number is appended to the original template name. For example, CL3-A_

NOTE: While you can reuse a template created for a certain array, the new template may display blank charts if the metrics are not part of the selected array. You cannot apply a template which contains data of multiple arrays to any one array.

To add more metrics to an existing template, select the desired metrics from the component/feature screens. You can add a single metric to a template by right-clicking the metric and selecting Save As > Template. In the Save Template dialog, select Existing Template (also the default selection), choose the desired template from the drop-down filter, and click OK. The options to perform real-time charting, trending and forecasting, and data export of the saved template are enabled using the Actions menu. You can also right-click an individual chart to apply the trending and forecasting, real-time monitoring, and email and save chart options.

Save template charts

Procedure

1. Navigate to the component page as required, and select the record(s). In the Chart View pane, using the Actions menu, choose the metrics for which you want to plot data. The graphs for the selected metrics are plotted in the Chart View.

NOTE: You can choose a combination of components and metrics which you want to save as template charts.

2. Navigate to Actions > Save As > Save Template. The Save Template page opens.
3. To create a new template, click New Template.
4. In the Template Name box, provide a name for the template, and click Ok. The template name must not exceed 24 characters.
5. To view the template which you have saved, navigate to the Templates screen.
6. From the Templates menu, choose the template you have saved. You can view the performance graphs of the selected metrics, and monitor the behavior of multiple components and arrays in one screen.

To add more metrics to an existing template, select the desired metrics from the component/feature screens. You can add a single metric to a template by right-clicking the metric and selecting Save As > Template. In the Save Template dialog, select Existing Template (also the default selection), choose the desired template from the drop-down filter, and click OK.

IMPORTANT: The template name must not exceed 24 characters. Using template charts, you can save only the combination of components and metrics; the data in the charts at the time of creating the template is not saved. Each template accommodates components and metrics that belong to the same metric category. Components and metrics that belong to a different metric category are automatically considered as a separate template chart request. If you save a new set of components and metrics (belonging to the same or different metric categories) with an existing template chart name, the new set is automatically appended to the existing set in the template chart. When you load that template chart, the charts are segregated based on the metric categories. You can also save the new set of components and metrics as a separate template.

NOTE: In addition to viewing your template charts, you can also generate, save, or schedule reports for template charts.

Modify or delete a chart template

Procedure

1. From the HPE XP7 Performance Advisor menu, click Templates.
2. From the Templates filter, select the template to modify or delete.
3. Click Delete Template.
4. Do one of the following:

   To delete the required metrics, click Delete Selected Metrics, and then click the check box against the metrics which you want to remove from the template.
   To delete the entire template, click Delete Entire Template.
5. Click OK.

Monitor associated components

Monitoring data by logically connecting similar component types is important to identify, investigate, and find the origin of performance issues. This logical association of components enables navigation through different levels of component types to select and view performance graphs of specific components. The following list shows the supported associations:

  Ports: Host Groups, LDEVs
  Host Groups: Ports, LDEVs
  Cache: Raid Groups, Processors
  Processors: LDEVs, Cache
  LDEVs: Raid Groups, Host Groups, Ports, Processors, Journals, Pools
  Raid Groups: LDEVs, Cache, Pools
  THP/SMART Pools: Vvols, Raid Groups, Tier
  Continuous Access: Host Ports, CA Ports, Journals
  Journal: Volumes, Continuous Access
  Tier: Pools
  Host View: Host Groups, LDEVs, Ports

When you click an associated component tab, the top associated components are displayed based on set criteria for that component. The following list shows these criteria:

  Ports: Average Frontend IOPS and Average Frontend MBPS metrics
  Host Groups: Host Groups IOPS, Host Groups MBPS, Host Groups Average Read Response Time, and Host Groups Average Write Response Time metrics
  Cache: Cache Usage Utilization and Cache Write Pending Utilization metrics
  Processors: MP Blade Utilization metric
  LDEVs: Average Read Response Time and Average Write Response Time metrics
  Raid Groups: Overall Raid Group Utilization

  THP/SMART Pools: Pool Average Read Response Time and Pool Average Write Response Time metrics
  Continuous Access: Pair status
  Journals: Journal status and RPO metric
  Tiers: Tier IOPS metric

You can view the associated components for a component record which you select from the master pane. The associated component is displayed with a subset of records that are associated with the main component. For example, the associated components for Ports are: Host Groups > individual host group > LDEVs > individual LDEV. If you notice that the response time of a particular LDEV is high, drill down to the associated ports to view their performance metrics for the duration when the LDEV response time is found to be high. Based on your requirement, select the components to view their performance graphs for related metrics in the detail pane.

View associated components

Procedure

1. From the HPE XP7 Performance Advisor, navigate to a component screen. Alternatively, you can drill down to a component screen from the dashboard.
2. Select one or multiple components, and perform one of the following:
   To view the performance graphs of an associated component on the same screen, click an associated component tab.
   To view the performance graphs of an associated component with more details, right-click the component and click Association Link. Select an associated component from the Association Link menu. The associated component screen appears, which displays the performance, configuration, and status of the associated component. This information is filtered according to the selection of the main component records. For example, if you want to view the associated LDEVs for selected ports in the Ports screen, select the ports, right-click them, and select LDEV from the Association Link menu. You traverse to the LDEV screen, which displays the information related to the selected ports.

Set top X components for associated tabs

Follow these steps to set the number of top X associated components that are displayed in a component tab:

Procedure

1. From the HPE XP7 Performance Advisor, navigate to a component screen. Alternatively, you can drill down to a component screen from the dashboards.
2. Select an associated component tab and click Actions > Association Settings.
3. Enter a valid number between 1 and 50 in the Associated Settings dialog box, and click Ok. This change is reflected only for that tab. Each component tab displays a maximum of the top 50 components. To view the next and previous sets of top X components, click the arrows.

Select metrics for an associated component tab

You can select the metrics to view the desired performance charts for an associated component. For example, if you are on the Ports screen, you can select the desired metrics to view performance charts for LDEVs.

Procedure

1. From the HPE XP7 Performance Advisor, navigate to a component screen. Alternatively, you can drill down to a component screen from the dashboards.
2. Select an associated component tab and click Actions > Select Metrics.
3. On the Select Metrics dialog box, Component Name displays the selected associated component. To change the component, select another component from the list.
4. Select a metric category from the Metric Category list and select the desired metrics.
5. Click Ok.

View historic charts in the same screen by using the Group by Metric option

Use the Group by Metric option to view the historic charts of the associated components in the same screen.

Procedure

1. From the HPE XP7 Performance Advisor, navigate to a component screen. Alternatively, you can drill down to a component screen from the dashboards.
2. Select one or multiple components in the main pane.
3. Select the Group by Metric option in the description pane. The associated component tabs appear as buttons.
4. To view the performance charts on the same screen, click an associated component button. The main component is selected by default.
5. To set the metrics for the associated component, perform the following:
   a. Click Actions > Select Metrics.
   b. On the Select Metrics dialog box, select a component in the Component Name list.
   c. Select a metric in the Metric Category list.
   d. Select the check boxes for the required metrics, and click Ok.

NOTE: When you update any metrics for a component, the changes are reflected for the component across PA. For example, if you select the metrics for the LDEVs from the Ports screen, the changes are reflected for the LDEVs in the LDEV screen, the Host Group screen, and so on.

6. To set the top X number for the associated component, perform the following:

   a. Click Actions > Associated Settings.
   b. Enter an integer in the Top X Associated Component, and click Ok.
   c. If the number you provide is less than the total number of available associated components, use the arrow buttons to view the next set of components.
7. By default, only the Custom option is enabled for duration. Either retain the default values, or enter the start date, end date, and time.
8. Click Apply. The performance charts for the associated component appear in the chart work area. These plotted charts are grouped and sorted based on the metric category. Also, the total number of components that are plotted in charts appears above the time and date filter. When you click an associated component button again, the performance charts for that associated component are removed from the chart work area.

For example, if you want to view the performance charts for the top 10 associated LDEVs on the Ports screen, select a port from the main pane. Select the Group by Metric option and provide the duration. Click the Ldev button, and the performance charts of the top 10 associated LDEVs for the selected port appear in the chart work area.

PA Settings

About PA Settings

The PA Settings screen enables you to configure commonly used settings, such as the following:

Registration of the arrays with the respective array SVPs
Email notification for the alerts, report generation, and data collection failure
Configure database size
Severity level for logging the events, which PA uses to filter events and log only those that match the set severity level
Management station date and time to be in sync with the time zone where it resides
User-friendly names for the arrays (array alias)
Forecast array performance, and configure the real-time database

IMPORTANT: You must log on to PA as an administrator or a user with administrator privileges to perform the above-mentioned tasks. However, administrator privileges are not required to manage the custom groups and the fabricated LDEV records.

You can also configure the following specific settings:

Set the duration that PA uses to predict the average read and write response time of the LDEVs.
Configure email notification settings to receive notifications from the PA Monitor service, which periodically monitors the statuses of the PA services.

PA Settings screen details

Register/Save SVP Credentials
  Array: Array serial number.
  Array IP Address: The valid IP address of the array.
  Registered Array: The IP address of the SVP registered in PA.
  User Name: SVP user name.

Email settings

SMTP Server Settings

  IP Address / Hostname: IP address or host name of the SMTP server that is used for processing emails. The default SMTP server IP address is localhost.
  SMTP Port: Related port number (accepts only numbers). The default port number is 25.
  Source Email Address: Source email address used to dispatch all the alert and report notifications, and the performance data collection failure notifications.
  SMTP Authentication: Displays whether SMTP authentication is enabled or disabled.

Alert Settings
  Email Address: A valid destination email address in one of the following formats:
    <alphanumeric_string>@<character_string>.<character_string>. For example, abc123_abc@xyz.com
    <alphanumeric_string>@<character_string>.<character_string>.<character_string>. For example, abc123_abc.123@xyz.co.in
    The default destination email address for receiving the alert notifications is administrator@localhost.
  Subject: An appropriate subject text for the alert notifications. PA uses the specified subject line as the default subject line for all the email notifications that are dispatched when a component is performing beyond the set threshold limit. The default subject line is XP7 Alert.
  Good Info Alert Title: An appropriate title text for the Good Information alert (recovery alert) notifications. PA uses the specified title as the default title for all the recovery alert notifications. These notifications are dispatched when the performance of a component drops below the set threshold limit for the first time. The default title is Good Information Alert.
  Good Info Alert Flag: Displays the Good Info Alert Flag check box to receive the respective Good Information Alert notifications. By default, PA dispatches the Good Information Alert notifications.

Reports Settings
  Email Address: A valid destination email address as specified under Alert Settings.

Subject: An appropriate subject text for the report notifications. PA uses the specified subject text as the default text for all the email notifications that are dispatched when a report is generated as scheduled. The default subject line is XP7 Performance Report.
Report Name: Set the report name.
Customer Name: The name of the customer for whom the report is generated.
Consultant Name: The name of the consultant who is associated with the customer.
Array Location: The location of the XP/XP7 disk array for which the report is generated. This information is useful if the XP/XP7 disk array is located in a different site, away from the management station.

Data Collection Settings
Email Address: A valid destination email address, as specified under Alert Settings.
Subject: An appropriate subject text for the data collection failure notifications. PA uses the specified subject text as the default for all the email notifications that are dispatched whenever a performance data collection fails. The default subject line is XP7 Data Collection Failed.
Notify Data Collection Failure: Option to choose whether to receive the data collection failure notifications. By default, PA does not dispatch the data collection failure notifications.

SNMP Settings
IP Address / Hostname: The IP address or host name of the SNMP server.
SNMP Community Name: Community name (Public or Private) for the SNMP server.

Performance Advisor Monitor Settings
Email Address: Recipient email address to which the PA Management Station service failure notification is sent.
Subject: Subject line for the PA Management Station service failure notification.

DB Configuration
Current database size: The current database size for PA on the Management Station.
Configured Maximum Database Size: The maximum database size that you can configure for the PA database. You can increase the PA database size based on the disk space available on the management station where PA is installed.

Disk Space available on drive where Database exists: The remaining disk space on the drive where the PA database is installed.

User Settings
Event Log > Log events severity: Option to set the severity level for events generated in the Event log.
TimeZone > Current TimeZone: Option to select a time zone for your management station.
Data Analysis > Response Time (in Days): Option to set the duration to predict the average read and write response time of LDEVs.

Array Alias Names Setting
Array: Displays the list of all the arrays managed by PA.
Alias Name: Displays the corresponding alias name of the arrays.

Forecast Setting
Daily Forecast Period: Displays the daily forecast duration.
Weekly Forecast Period: Displays the weekly forecast duration.
Monthly Forecast Period: Displays the monthly forecast duration.

Realtime Update Settings
Array: Displays the arrays monitored by PA.
Hostagent: Displays the host agent name configured for each array.
Command Devices: Displays the command devices mapped to the respective host agents.

Auto Update/Realtime Chart Settings
Append data points and Auto Flush old data points for the chart: Option to set whether to remove the old data points from the left side of the charts and append new data points on the right side of the charts. Displays the following values: Yes, No.

Dashboard Settings
Dashboard Duration: Displays the current duration set for all the dashboards.
Top X Component Value: Displays the number of top components to be displayed for all the dashboards.

About saving and registering SVP Credentials

You can register the Service Processor (SVP) of an array with the respective management station that has PA monitoring these disk arrays. It is required if you want PA to directly collect data from the array through the array SVP.

The service processor is a notebook computer built into an HPE XP7 Storage system that hosts the Remote Web Console software and is used to configure and maintain the storage system. The SVP provides a direct interface to the disk array and is used only by the HPE service representative.

The registration process is unique to each management station; only one instance of the registration is possible from every management station. To complete the registration for an array SVP, provide the IP address of that SVP. After registration, the IP address is automatically available for that array when you initiate an outband mode of configuration collection. (The outband mode uses TCP/IP to directly connect to the virtual command device in the SVP of an array, and collects the configuration data through the SVP. Using the outband mode ensures that the performance of the SAN is not affected by the data collection.)

IMPORTANT: You must log on to PA as an administrator or a user with administrator privileges to perform the earlier-mentioned tasks. However, administrator privileges are not required to manage the custom groups and the fabricated LDEV records.

For an XP7/P9500/XP24000 Disk Array, the IP address of the management station is also registered with the array SVP. For a P9500/XP7, it is recommended that you maintain separate SVP login credentials, which you can use for the outband mode of configuration data collection.

On a few occasions, the SVP IP address registration can fail for an array, due to the following reasons:
• The SVP is offline or locked by another user
• The IP address does not belong to the selected array SVP

If you upgrade PA from or later versions, and the SVP registration data already exists for the XP24000 array, it is automatically available in the newer version of PA when the PA Tomcat service starts. So, the SVP registration process need not be repeated. If the upgrade fails for some reason, the SVP registration data is still available in the existing version from where you had planned the upgrade.

The registration process may take some time. After you provide the array IP address and click Save & Register, wait until PA displays a confirmation that the registration is complete.

Save/Register SVP credentials in PA

Procedure
1. From the HPE XP7 Performance Advisor main menu, navigate to PA Settings.
2. In the Register/Save SVP Credentials pane, click Edit.
3. From the Array list, select the array for which you want to register/save the SVP credentials.
4. In the Array IP Address, type the SVP IP address of the disk array.
5. In the SVP User Name and SVP Password boxes, type the user name and password respectively.
6. To save the credentials, click Save & Register.

The SVP IP address, user name, and password are saved in the PA database. PA also uses these credentials to validate the connection with the specific disk array. PA first saves and then registers the credentials.

NOTE: On a few occasions, the SVP IP address, user name, and password are not saved. It might be because the SVP is offline. Wait for a few minutes and try again.

PA automatically uses the SVP IP address every time you initiate a configuration data collection for the selected XP disk array. The array IP address and the SVP user name and password are used whenever you enable authentication to initiate configuration and performance data collection for the selected XP7 disk array. Click Reset if you want to clear the current entries in the fields and re-enter new data.

About Email Settings

You can configure PA to dispatch email notifications when the following events occur:
• Reports are generated on schedule
• Performance of components crosses the set threshold limits
• Performance of components drops below the set threshold limits for the first time
• Performance data collection fails

The settings that you specify for PA to dispatch the email notifications comprise the following:
• Providing the source SMTP server IP address or host name and the port address
• Specifying the source email address for the alerts and reports notifications, and the data collection failure notifications
• Specifying separate destination email addresses for the alerts, reports, and the data collection failure notifications
• Specifying a community name (Public or Private) for the SNMP server
• Specifying separate subject lines for the report notifications, XP7 Alerts notifications, and the data collection failure notifications
• Specifying an appropriate title for the Good Information alert notifications

IMPORTANT:
• The Email Address is a mandatory field to generate alerts and reports. Provide a valid destination email address that receives the notifications when the alerts and reports are generated, or the performance data collection fails. For example, test1@xyz.com. You can also provide multiple email addresses by inserting a semicolon between the addresses, in the following format: test1@xyz.com;test2@xyz.com;test3@xyz.com. The total count of characters in the Email Address field must be less than or equal to 512 characters. However, if you specify a new email address while configuring the alerts or creating the reports in the respective screens, it is used only with the current set of alert or report records for which it is provided. The new email address does not supersede the existing email address provided on the Email Settings screen.
• PA Monitor Settings does not support multiple email addresses. If multiple addresses are to be notified, you must provide an alias email address.
• The default values for the email settings are directly read from the serverparameters.properties file and displayed in the respective fields on the Email Settings screen. If you retain the default values, PA uses them for all the email notifications that are dispatched to the intended recipients.
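The address rules above are easy to check before saving. The following is a minimal sketch in Python that mirrors the documented constraints (semicolon-separated addresses, 512-character total limit); the regular expression is an illustrative approximation of the formats listed in the screen details, not PA's own validation logic:

import re

# Illustrative approximation of the documented address formats:
# <alphanumeric_string>@<character_string>.<character_string>[.<character_string>]
ADDRESS = re.compile(r"^[A-Za-z0-9_.]+@[A-Za-z]+\.[A-Za-z]+(\.[A-Za-z]+)?$")

def is_valid_address_field(field: str) -> bool:
    """Accepts one or more semicolon-separated addresses, max 512 chars total."""
    if len(field) > 512:
        return False
    return all(ADDRESS.match(addr) for addr in field.split(";"))

print(is_valid_address_field("test1@xyz.com;test2@xyz.com;test3@xyz.com"))  # True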

Configure SMTP server settings

Prerequisites
To receive email notifications on the status of the Tomcat and database services, you must configure the SMTP parameters.

Procedure
1. From the HPE XP7 Performance Advisor main menu, navigate to PA Settings, and in the Email Settings pane, click Edit.
2. In SMTP Server Settings, provide the IP Address / Hostname of the SMTP server that will be used for processing emails. The default SMTP server IP address is localhost.
3. In the SMTP Port, specify the related port number (accepts only numbers). The default port number is 25.
4. In the User Name/Email Address, specify a common source email address to dispatch all the alert and report notifications, and the performance data collection failure notifications. The default source email address for dispatching these notifications is administrator@localhost.
5. If the SMTP server is configured with authentication, select the Require Authentication check box. The default email address is administrator@hpe.com. Enter the password in the Password field, which appears after selecting the check box.
6. To validate the SMTP server settings, enter an email address in the Test Email Address and click Test SMTP. If the SMTP data entered is valid, a test mail is sent to the specified email address, and the following message is displayed at the bottom of the PA Settings screen:
Valid SMTP settings
If the SMTP data entered is invalid, the following message is displayed:
Invalid SMTP settings
7. Click Save.
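The Test SMTP button performs this validation from within PA. If you want to verify the same server and port independently of PA, a quick sketch using Python's standard smtplib is shown below; the host, port, and addresses are placeholders for the values entered on the screen:

import smtplib
from email.message import EmailMessage

SMTP_HOST, SMTP_PORT = "localhost", 25   # values from SMTP Server Settings
SOURCE = "administrator@localhost"        # default source email address
DESTINATION = "test1@xyz.com"             # placeholder test address

msg = EmailMessage()
msg["From"], msg["To"], msg["Subject"] = SOURCE, DESTINATION, "SMTP settings test"
msg.set_content("Test mail to verify the SMTP settings used by PA.")

with smtplib.SMTP(SMTP_HOST, SMTP_PORT, timeout=10) as server:
    # server.login(SOURCE, "password")  # only if Require Authentication is enabled
    server.send_message(msg)
print("Valid SMTP settings")  # an unreachable or misconfigured server raises instead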

Configure alert settings

Prerequisites
• A valid source email address, and the IP and port addresses of the SMTP servers are specified. PA uses the specified SMTP server details to dispatch email notifications to the intended recipients.
• Specify a community name (Public or Private) for the source SNMP server in the SNMP Community Name field in the SNMP Settings section. By default, Public is used as the community name.

Procedure
1. From the HPE XP7 Performance Advisor main menu, navigate to PA Settings, hover over the Email Settings, and then click Edit.
2. In the Alert Settings, type a valid destination email address.
3. In the Subject, provide an appropriate subject text for the alert notifications. For example, set the default subject line as XP7 Alert, P9000 Alert, and so on.
4. In the Good Info Alert Title, provide an appropriate title text for the Good Information alert (recovery alert) notifications. The default title is Good Information Alert.
5. Select the Good Info Alert Flag check box to receive the respective Good Information Alert notifications.
6. Click Save.

IMPORTANT: By default, PA dispatches Good Information alert notifications. However, if it is disabled, you must enable the Good Info Alert Flag check box on the Email Settings screen to receive the XP7 Alert - Good Information Alert notifications.

Configure reports settings

Procedure
1. From the HPE XP7 Performance Advisor main menu, navigate to PA Settings, and in the Email Settings pane, click Edit.
2. Scroll down to the Reports Settings section, and provide a valid destination email address as specified above in the Alert Settings section.
3. In the Subject, provide an appropriate subject text for the report notifications. PA uses the specified subject text as the default text for all the email notifications that are dispatched when a report is generated as scheduled. The default subject line is XP7 Performance Report.
4. In the Report Name, type the name of the report you want to generate.
5. In the Customer Name, type the name of the customer for whom the report is generated.
6. In the Consultant Name, type the name of the consultant who is associated with the customer.

7. In the Array Location, type the location of the XP/XP7 disk array for which the report is generated. This information is useful if the XP/XP7 disk array is located in a different site, away from the management station.
8. Click Save.

Configure data collection settings

Procedure
1. From the HPE XP7 Performance Advisor main menu, navigate to PA Settings, and in the Email Settings pane, click Edit.
2. Scroll down to the Data Collection Settings section, and in the email address box, provide a valid destination email address as specified in the Alert Settings section above.
3. In the Subject, provide an appropriate subject line for the data collection failure notifications. PA uses the specified subject text as the default for all the email notifications that are dispatched whenever a performance data collection fails. For example, XP7 Data Collection Failed.
4. Select the Notify Data Collection Failure check box to receive the data collection failure notifications. By default, PA does not dispatch the data collection failure notifications.
5. Click Save.

Configure SNMP settings

In the event of a failure or an abnormal issue detected in a storage system, the SNMP Agent can report the condition using a trap message. PA provides two destinations that you can configure for receiving such alert notifications: email destination and SNMP destination.

Procedure
1. From the HPE XP7 Performance Advisor main menu, navigate to PA Settings > Settings, and click Edit.
2. Scroll down to the SNMP Settings section, and in IP Address / Hostname, provide the IP address of the SNMP server that will be used for processing the trap notifications.
3. In SNMP Community Name, provide the community name. The purpose of specifying an SNMP community name is to separate the notifications that belong to a particular group (community). You can specify only two community names: Public (default) or Private.
4. To validate the SNMP server settings, click Test SNMP. If the SNMP server address entered is valid, the following message is displayed:
Trap dispatched to the SNMP server
If the SNMP server address entered is invalid or the SNMP server is not accessible, the following message is displayed:
Trap not dispatched. It might be due to invalid IP address or server name, or the SNMP server is not accessible.
5. Click Save.

The settings are updated in the serverparameters.properties file. If you click Cancel, the previously specified values are retained in the serverparameters.properties file.
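Test SNMP sends a trap in the same way from within PA. To confirm independently that the configured destination accepts SNMPv2c traps, the sketch below uses the third-party pysnmp package; the server name is a placeholder, and the standard coldStart trap is used purely as a test payload:

from pysnmp.hlapi import (CommunityData, ContextData, NotificationType,
                          ObjectIdentity, SnmpEngine, UdpTransportTarget,
                          sendNotification)

# Community name and server address come from the SNMP Settings section;
# mpModel=1 selects SNMPv2c. Port 162 is the conventional trap port.
error_indication, _, _, _ = next(sendNotification(
    SnmpEngine(),
    CommunityData("public", mpModel=1),
    UdpTransportTarget(("snmp-server.example.com", 162)),
    ContextData(),
    "trap",
    NotificationType(ObjectIdentity("1.3.6.1.6.3.1.1.5.1"))))  # coldStart

print("Trap dispatched to the SNMP server" if error_indication is None
      else f"Trap not dispatched: {error_indication}")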

Configure PA Monitor Settings

Prerequisites
• You must configure the SMTP parameters for the PA Monitor service to dispatch email notifications to the intended recipients.
• A valid destination email address must be specified.

The Performance Advisor Management Station Monitor service periodically monitors the PA Management Station Tomcat service, and dispatches appropriate email notifications, which provide the status of the service, to the specified email address. Ensure that the SMTP settings are configured to receive email notifications on the status of the Tomcat and database services.

Procedure
1. From the HPE XP7 Performance Advisor main menu, navigate to PA Settings, and in the Email Settings pane, click Edit.
2. In Email Address, type a valid destination email address.
3. In Subject, provide an appropriate subject line. For example, PA MS Service Failure Notification.

Manually configure database size

Prerequisites
• Before allocating the disk space, verify the available disk space. In the Purge pane on the Purge/Archive screen, a label named Disk Space Available displays the available disk space.
• During installation of PA, the configured maximum database size is set to 150 GB by default. You can modify the size based on the available disk space after installation of PA.
• If the database has occupied x% of the maximum configured database size, where x is the threshold value specified in the purgeparameters.properties file, or if the available disk space is less than y GB, where y is the disk space value specified in the purgeparameters.properties file, then automatic deletion of the oldest records is initiated in the database. The default threshold value is 90% and the default disk space value is 3 GB. If you change these values in the purgeparameters.properties file, the new values are considered for auto purge (see the sketch after this procedure).
• If the database size increases, you cannot revert the configured maximum database size to a value lesser than the current database size.

Procedure
1. From the HPE XP7 Performance Advisor main menu, navigate to PA Settings, and in the DB Configuration pane, click Edit.
2. In the DB Configuration page, scroll the slider on either side to choose the disk space size that you want to allocate. The configured maximum database size is set to 150 GB by default. The maximum disk space that can be allocated is 500 GB.
3. Click Save.

The Configured Maximum Database Size reflects the new allocated database size. The Disk Space available on drive where Database exists displays the remaining available system disk space on the drive where the database exists.
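The auto-purge rule above can be expressed directly. A small sketch of the documented condition with its default values follows; the variable names are illustrative, not the actual keys in purgeparameters.properties:

def auto_purge_triggered(db_size_gb: float, max_db_size_gb: float,
                         free_disk_gb: float,
                         threshold_pct: float = 90.0,  # default x value
                         min_free_gb: float = 3.0) -> bool:
    """Oldest records are deleted when the database reaches x% of the
    configured maximum size, or free disk space falls below y GB."""
    return (db_size_gb >= max_db_size_gb * threshold_pct / 100.0
            or free_disk_gb < min_free_gb)

# With the default 150 GB maximum, auto purge starts once the
# database reaches 135 GB (90% of 150 GB), even with ample disk space.
print(auto_purge_triggered(135, 150, 50))  # True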

About User Settings

You can perform the following in the User Settings pane:

Severity level for events: Provides the option to set the severity level for the events that are logged in the Event Log screen. Only those events that match the specified severity level are displayed on the Event Log screen. The following are the three types of severity levels:
• User Action: Errors for user-instigated activities, for example, if the user deletes a performance data collection schedule.
• System Error: Exception errors given by PA.
• Critical Error: Critical errors, where PA may not function.

Time zone for management station: Displays the option to choose the time zone for your management station. This ensures that the management station is synchronized with the time zone where it resides.

Duration to predict the LDEV response time: Displays the option to set the duration that PA must use to predict the average read and write response time of LDEVs.

Set the severity level for events

Procedure
1. From the HPE XP7 Performance Advisor main menu, click PA Settings > User Settings.
2. Click Edit.
3. From the Log all events with severity at drop-down list in the Event Log section, set the desired severity level. The options are All, User Action, System Error, and Critical Error. For example, if you select the severity level as User Action, only messages with that severity level appear on the Event Log screen.
4. Click Save.

After this setting is saved, the events generated are filtered and only those matching the specified severity level are displayed on the Event Log screen.

NOTE: This change affects only those messages that are created after you instigated the severity change. All messages that were logged before you set the severity level still remain in the PA database, and appear on the Event Log screen.

Set the time zone for management station

Procedure
1. From the HPE XP7 Performance Advisor main menu, click PA Settings > User Settings.
2. Click Edit.
3. In the TimeZone Settings section, select the appropriate time zone from the TimeZone list. By default, the TimeZone displays the local time zone where the management station resides.
4. Click Save to update the time zone details on the management station.

CAUTION: Ensure that the date and time on the management station and hosts are synchronized with the local time zone to receive accurate configuration data. This condition is also applicable for the client systems that use the IE browser to access PA on a management station, and systems that have the CLUI software installed.

Set the duration to predict the LDEV response time

Procedure
1. From the HPE XP7 Performance Advisor main menu, click PA Settings > User Settings.
2. Click Edit.
3. In the Data Analysis Settings section, from the Avg. Read/Write Response Time Analysis Duration (in Days) drop-down list, select the duration. You can select a maximum of seven days. By default, PA considers a duration of two days for the prediction.
4. Click the Save button on the Data Analysis section to update the prediction duration. In the Save page, click OK.

PA does the following:
1. Analyzes the average read and write response time for the LDEVs that belong to all the arrays, or a combination of arrays.
2. Displays an indicator on the Troubleshooting screen for the LDEVs that have a peak load in their average read and write responses.

Set alias name for arrays

You can view all available arrays and their alias names. You can change or set the alias name for an array. This name is displayed across all the screens.

NOTE: You can use all the special characters in the alias name except / and \.

Procedure
1. From HPE XP7 Performance Advisor, click PA Settings > Array Alias Names.
2. Click Edit.
3. Enter a name in the Alias Name field on the Edit Array Alias Setting screen. You can set or change the alias names for multiple arrays at a time.
4. Click Save.

Manage forecast settings

Use this option to change the default settings for the number of days, weeks, or months of forecast data used to plot performance charts.

Procedure
1. From HPE XP7 Performance Advisor, click PA Settings.
2. To set the forecast settings of an array, hover over the Forecast Settings, and click Edit.
3. Enter a value between 1 and 7 in one or all of the following fields:
• Daily Forecast Period
• Weekly Forecast Period
• Monthly Forecast Period

NOTE: Enter a higher number to get a more accurate forecast of array performance.

Update real-time database

Procedure
1. From the HPE XP7 Performance Advisor main menu, click PA Settings > Realtime Update Settings.
2. Click Edit.
3. In the Update Realtime Database page, update the host agent and the command device.
4. To initiate real-time monitoring, click Save and Update. Updating the real-time server takes several minutes. However, you can check the status of the action (as Successful or Failed) in the Event Log screen.

NOTE: Click Save if you only want to save the configuration in the database. This does not enable real-time charting for metrics.

When the update is complete, the following message appears:

Realtime Server update is successful for array <arrayname> for HA <Host name>.

You can then proceed with the real-time data collection and plotting of charts. Ensure that the above-mentioned steps are performed before you initiate a real-time performance data collection for the arrays.

IMPORTANT: Ideally, if an array is connected to two host agents, configure separate real-time data collection through each of the host agents. At a time, ensure that only one instance of PA collects the real-time performance data from an array through the respective host agent.

If there have been configuration changes on the XP/XP7 disk array for which you want to collect the real-time performance data, the following informational message appears when you select components and start plotting the real-time graphs:

The <XP or XP7 disk array> configuration data available in the Real Time Server is not in sync with the configuration data available on Performance Advisor. This could occur due to the following reasons: Command device is invalid, <XP or XP7 disk array> is no longer connected, or selected components are not available on the selected <XP or XP7 disk array>.

Set the dashboard duration and the number of top components

Procedure
1. From HPE XP7 Performance Advisor, click PA Settings.
2. Click Edit.
3. On the Dashboard Setting dialog box, select the duration in the Dashboard Duration and enter the number in Top X Component value. This number must be a multiple of 10, starting from 10.
4. Click Ok.

These changes are reflected across the array-level dashboards.

Receive email notifications when PA services fail

The PA Monitor service periodically monitors the statuses of the following services and accordingly notifies the intended recipients:
• PA Tomcat service
• PA Database service
• PA Database Listener service

To receive email notifications, you must configure certain SMTP parameters.

IMPORTANT: The PA Monitor service does not monitor the PA Security service.

Notification for PA Tomcat service failure

During the course of monitoring, if the PA Monitor service identifies that the PA Tomcat service has abruptly stopped or failed to start, it does the following:
1. Attempts to restart the PA Tomcat service 'n' number of times, where 'n' indicates the retry count that is specified. By default, the retry count is set to five, which means that five attempts are made to restart the PA Tomcat service before an email notification is dispatched. For more information on specifying the retry count, see Configure retry count.
2. Intimates the intended recipients on the success or failure of the restart attempts, such as the following:
• Tomcat Server was not running on management station, Restart of the service was successful after 'n' attempt.
• Tomcat Server was not running on management station, Restart of the service was not successful after 'n' attempt. In such a case, the appropriate reason for the failure is logged in the paservicesstatus.log file located in the <Install_drive>:\HPSS\paMonitor\logs folder. You must contact HPE Support for further assistance along with the paservicesstatus.log file.

If the PA Tomcat service is manually stopped, the PA Monitor service does not take any action, which includes not sending any email notifications. The following messages appear in the jakarta_service_<yyyymmdd>.log file located in the <hppss_home>\pa\tomcat\logs folder if the PA Tomcat service is manually stopped. These messages do not appear if the service has abruptly stopped:

<Date_time> [info] Stopping service...
<Date_time> [info] Service stopped...
<Date_time> [info] Run service finished...
<Date_time> [info] Procrun finished...

The following illustration depicts the above-mentioned conditions:

NOTE:
• If you configure the SMTP parameters but do not specify a retry count, the PA Monitor service does not attempt to restart the PA Tomcat service. Also, it does not dispatch any email notification to the intended recipients.
• If you do not configure the SMTP parameters but specify the retry count, the PA Monitor service attempts to restart the PA Tomcat service. But the email notification is not dispatched, as the recipients are not configured. In such a case, if the PA Tomcat service fails to restart, manually check the status logged in the paservicesstatus.log file.

Notification for PA Database and Database Listener services

During the course of monitoring, if the PA Monitor service identifies that the PA Database and Database Listener services are manually stopped, have abruptly stopped, or failed to start, it notifies the intended recipients. However, it does not attempt to restart these services in case of failure. You must manually restart the services and contact HPE Support for further assistance if the services do not restart. The following illustration depicts the description.

The following email notifications are dispatched:
• PA Database is not running on management station.
• PA Database Listener is not running on management station.

NOTE: If you do not configure the SMTP parameters, the PA Monitor service does not dispatch any email notification to the intended recipients.

Configure retry count

IMPORTANT: Applicable only for the PA Tomcat service.

Specify the number of times the PA Monitor service must attempt to start the PA Tomcat service in the PAMonitor.properties file located in the <Install_drive>:\HPSS\paMonitor\conf folder on your management station:

#number of times restart to be retried
retrycount=5
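The restart-and-notify behavior described in the preceding sections amounts to a bounded retry loop. The sketch below restates that logic in Python for clarity; restart_tomcat() and send_notification() are hypothetical stand-ins, not part of PA, and the real PA Monitor service implements this internally:

def restart_tomcat() -> bool:
    """Hypothetical helper: attempt to start the PA Tomcat service and
    report success. The real mechanism is internal to the PA Monitor service."""
    return False  # placeholder result

def send_notification(body: str) -> None:
    """Hypothetical helper: dispatch an email via the configured SMTP server."""
    print(body)

def monitor_tomcat(retrycount: int = 5) -> None:
    """Documented behavior: retry up to 'retrycount' times, then notify either way."""
    for attempt in range(1, retrycount + 1):
        if restart_tomcat():
            send_notification("Tomcat Server was not running on management "
                              "station, Restart of the service was successful "
                              f"after {attempt} attempt.")
            return
    send_notification("Tomcat Server was not running on management station, "
                      "Restart of the service was not successful after "
                      f"{retrycount} attempt.")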

Custom Groups

PA enables you to plot charts and monitor the performance of the associated LDEVs using the Custom Groups screen in the main menu. You can view the performance of specific LDEV metrics for a duration of your choice. From the Custom Group filter menu, select a custom group to view the plotted performance graphs of the associated LDEVs. You can also perform the following actions from the Actions menu: real-time monitoring of data, trend and forecast, saving CG templates, exporting, and emailing charts. To create, view, or modify custom groups, navigate to Summary View, and from the Component filter, select Custom Group.

About Custom Groups

PA enables you to create custom groups, where you add the LDEVs that you want to monitor frequently. You can create a custom group that has multiple LDEVs from different XP and XP7 disk arrays. After the configuration collection is complete for the disk arrays, the associated LDEV IDs and their details are displayed on the Custom Groups screen.

The Custom Groups screen appears when you click Summary View > Custom Group. Click Create, and you can scroll through the list of records on the Custom Groups screen to select LDEV records and add them to a custom group. You can filter the list based on the associated ACP pairs, ports, and the RAID Groups. In addition, you can also filter the LDEV records based on array, host, or a combination of them to view only those LDEV records that match your specific requirement. For example, LDEV records can be filtered based on the following combination: ACP pairs, Hosts, and RGs.

After you create a custom group, you can:
• View the performance summary of the associated LDEVs.
• View a graphical representation of the associated LDEVs' performance for a specific LDEV metric and duration of your choice.
• Configure alerts on the associated LDEVs, so that PA monitors and sends appropriate email notifications to intended recipients. For more information, see Configure threshold limits for XP and XP7 disk arrays.

The following are important notes on custom groups:
• The LDEVs associated with multiple RAID Groups or multiple ACPs are treated as separate groups of items. For example, if you have an LDEV associated with the RAID Groups 1-1 and 1-2, you must select both in the RAID Groups list. An LDEV mapped to the RAID Groups 1-1 and 1-2 is treated separately from an LDEV mapped only to the RAID Group 1-1 or 1-2.
• Each page on the Custom Groups screen displays 150 LDEV records. The selection of records on the current page is retained when you navigate to other pages.
• The custom groups uniquely identify the LDEVs based on the following LUN attributes: Host Groups, RAID Groups, Host IDs, LUSE. If there is a configuration change in the above-mentioned LUN attributes, edit the custom groups to add the corresponding LDEV records again. It ensures that you view the updated data on the LDEVs and the associated LUN attributes. If you group the LDEVs by host groups and then modify the name of the host group, delete and recreate the custom groups.

Significance of creating custom groups

The following are a few examples that signify the use of custom groups:
• Continuous Access Synchronous is installed on an XP24000 array (primary storage server) to create a secondary copy of the production data. The production data is located on the primary volume (P-VOL) in the same XP24000 Disk Array. The secondary copy resides on the secondary volume (S-VOL) in an XP12000 Disk Array.
• The database server is located on a P-VOL in an XP24000 Disk Array and the data is replicated onto two S-VOLs. One S-VOL is located within the XP24000 Disk Array and the data is backed up using the XP7 Business Copy. The other S-VOL is located on a remote XP disk array and the data is backed up using the XP7 Continuous Access Synchronous.

You can create two custom groups to group all the LDEVs that belong to both the P-VOLs and S-VOLs, so that you can monitor only the selected LDEVs. By creating the custom groups, you can:
• View and analyze the performance trends of only the selected LDEVs (irrespective of the XP and the XP7 disk arrays or the CUs they belong to).
• Plot a graph for a metric of your choice and view a graphical representation of the LDEVs' performance for that particular metric when the workload was maximum.
• Configure thresholds on the LDEVs and generate alerts, if the performance values of these LDEVs go beyond the set threshold level.

Create custom groups

Procedure
1. From the HPE XP7 Performance Advisor main menu, click Summary View > Custom Group.
2. Click Actions > Create Custom Group. The Create Custom Group screen appears, displaying the list of LDEVs and their associated array components in the Custom Groups table.
3. Select the LDEV records for which you want to create a custom group. While selecting the records, use the Ctrl key for selecting multiple component records.
4. In the Custom Group Name text box, type a name. You can enter a maximum of 24 alphanumeric characters, including the underscore (_). Special characters, such as the hyphen (-) and comma (,), are not allowed (see the sketch below).
5. Click Create, and in the confirmation page, click OK.

The selected set of LDEV records is included in the custom group, and the new custom group is listed in the Custom Group filter. You can view the custom group details by clicking Actions > View Custom Group.
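A quick way to sanity-check a name against the rule in step 4 is a small sketch like the following; the regular expression simply encodes the documented limit of 24 alphanumeric or underscore characters:

import re

# Up to 24 alphanumeric characters or underscores; hyphen, comma, and
# other special characters are rejected, per the naming rule above.
VALID_GROUP_NAME = re.compile(r"^[A-Za-z0-9_]{1,24}$")

print(bool(VALID_GROUP_NAME.match("PVOL_SVOL_group_1")))  # True
print(bool(VALID_GROUP_NAME.match("pvol-group")))         # False: hyphen not allowed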

Using custom group filters

You can also use the following custom group filters to view a specific set of LDEV records in the Custom Groups table: Arrays, ACPs, Hosts, Ports, RGs.

The selection in each filter is independent of the selection in the other filters. For example, if a P9500 array is selected from the Arrays list, the Ports and ACPs lists are not updated to display only the ports and ACPs that belong to the P9500 array. The filters still display all the ports and the ACPs that belong to all the monitored XP and XP7 disk arrays.

To filter and view a specific set of LDEV records:
1. Select the values from the above-mentioned custom group filters, and click Apply. The existing set of LDEV records is filtered based on the filter criteria and displayed in the Custom Groups table. Click Clear if you want to restore the default settings across the Custom Groups filters. This action also displays all the LDEV records in the Custom Groups table. The Custom Groups screen is refreshed when you click Clear.
2. Provide a name for the custom group, and click Create Custom Group if you want to save the custom group.

View Custom Groups

Procedure
1. From the HPE XP7 Performance Advisor main menu, click Summary View > Component > Custom Group.
2. Under the Custom Group filter, select the custom group that you want to view.
3. Click Actions > View Custom Group. All the LDEVs that are part of the selected custom group are displayed in the View Custom Group page.

Table 13: View Custom Group screen details
DKC: Displays the IDs of the selected XP and XP7 disk arrays.
Array Type: Displays the type of the selected XP disk array (for example, XP24000, XP7, or P9500).
Host ID: Displays the IDs for the selected hosts.
Device File: Displays the XP disk array device file names that are pointing to the selected host. If an XP disk array is connected to a host that has a host agent installed for the HP-UX 11i v3 operating system, the DSF is displayed in a new format. A legacy DSF is displayed in parentheses next to the new format.
Port: Displays the identification numbers of the selected ports.
Port Type: The port type, such as Fibre or FCoE (applicable only for XP7 disk arrays).
SLPR: A disk array can be shared with multiple organizations and with multiple departments within an enterprise. Use Disk/Cache Partition to allocate all components of one disk array (all ports and CLPRs) to virtual disk arrays called SLPRs.

CLPR: CLPRs are the disk array's cache memory, partitioned for use as virtual cache memory across multiple hosts or applications. The cache memory stores the read and write information. It is controlled as two areas, one half in the CL1 and the other half in the CL2. During a power outage, the information in the cache is retained through a battery backup. However, in the newer array models, a forced destage can occur prior to the XP/XP7 disk array powering off, depending on the batteries, configuration, and so on.
LDEV: Displays the identification numbers of the selected logical devices.
LUSE Status: Displays one of the following for a selected LDEV: blank field = not a LUSE; M = a LUSE master; C = a LUSE component. NOTE: LUSE is not supported for XP7 arrays.
LUSE master: Displays the LDEV ID of the LUSE master, if the selected LDEV is a LUSE component. If the LDEV is not a LUSE component, this field is blank.
Ext-LUN: Displays the following options to indicate whether or not the selected LDEV is an Ext-LUN (Ext-LDEV): - (hyphen) = normal LUN; E = Ext-LUN; P = Ext-LUN provider (the selected LDEV is used as an Ext-LUN for another XP or XP7 disk array).
Host Group: Displays the host group name for the host. The host group name is a user-defined group on an XP or an XP7 disk array.
ACP Pairs: Displays the selected ACP pairs.
RG: Displays the selected RAID Groups.
Jnl: Displays the identification numbers of the Continuous Access journal groups.

Modify custom groups

Procedure
1. From the HPE XP7 Performance Advisor main menu, click Summary View > Custom Group.
2. From the Custom Group filter, select the desired custom group.
3. Click Actions > Edit Custom Group.
4. To add more LDEVs to a custom group:
a. In the LDEVs table, select the LDEVs that you want to add to the custom group. Alternatively, use the Custom Groups filter to locate the LDEVs that you want to add to a custom group. For more information on using filters, see Create custom groups.
b. Select the required LDEVs from the table, and click Add. Refresh the page to view the LDEVs that are added to the custom group.

Delete LDEVs/Custom Groups

Procedure
1. From the HPE XP7 Performance Advisor main menu, click Summary View > Custom Group.
2. From the Custom Group filter, select the desired custom group.
3. To delete LDEVs from the custom group, click Actions > View Custom Group.
4. From the table, select the LDEVs that you want to delete, and click Delete. While selecting the LDEVs, press the Ctrl key for random selection of multiple LDEVs. Refresh the page to reflect the changes.

To delete a custom group, select the custom group from the filter, and click Actions > Delete Custom Group.


Users

About Users

PA enables you to create user records, change passwords, delete user records, and view group properties using the Users screen. After PA is installed with the authentication type selected as Native, log in as an Administrator, create user accounts, and grant them privileges (administrator or user privileges). You can also log in as a storageadmin, who is an administrator user of Command View Advanced Edition Suite Software (CV AE) and has the same privileges as the administrator user of PA. The Security screen displays the users who are authorized to use PA and their groups.

IMPORTANT: If you see the link to Users enabled in the HPE XP7 Performance Advisor main menu, it implies that PA Native authentication was selected as the user authentication method during the PA installation. This link is disabled for users who log on to PA using their system or domain login credentials.

About creating and editing user profiles

You can perform the following functions if you log in with administrator privileges. This is to enhance the security of the management station:
• Add and remove users.
• Change passwords for other users who have the administrator or the user privileges. Users with administrator privileges cannot change the password for the default PA Administrator account. You can modify the password for the default PA Administrator account only if you log in as the default PA Administrator.
• Change membership information for PA users.

A general user cannot delete their own account; only the account password can be modified.

Create user records

Procedure
1. From the HPE XP7 Performance Advisor main menu, click Users.
2. From the Actions menu, click Add User.
3. In the New User page, provide the following details:
• In the Attributes section, the name of the new user and a brief description about the user profile.
• A password. Confirm the password.
• Assign the user to a group. The Select a Group drop-down list displays Administrators and StorageAdmins (read and write access), and Users (read access) privileges.
4. To create the user, click Create.

Change password

Procedure
1. From the HPE XP7 Performance Advisor main menu, click Users.
2. From the Users table, select the user record type.
3. From the Actions menu, click Change Password.
4. To change the password for your profile, in the Change Password page, type the existing password, the new password, and type the new password again in the Confirm Password box. If you are changing the password for another user profile, you are prompted to only provide the new password and reconfirm the new password.

NOTE: The password that you provide must not exceed 32 characters.

5. Click OK. The PA database is updated with the new password for the selected user record.

Delete user records

Procedure
1. From the HPE XP7 Performance Advisor main menu, click Users.
2. From the Users table, select the type of user record that you want to remove.
3. From the Actions menu, click Delete User.

View group properties

Procedure
1. From the HPE XP7 Performance Advisor main menu, click Users.
2. In the Groups pane, you can view all the user records created in the PA database. The following details are displayed in the Groups pane:
• The group name and its brief description.
• The names of the users who are members of the selected group.

NOTE: You cannot access the following screens if you are logged in with User privileges:
• Email settings
• Threshold settings
• Users
• Configure Alert
• Security (you can only modify your own password)
• Purge/Archive
• Reports (cannot schedule reports)
• Export DB (cannot schedule Export DB)
• License (cannot remove licenses)

Configure and manage alerts

Threshold settings

About Threshold settings

A threshold level is a value that you set, and PA uses it to compare the current performance value of a selected component with the set threshold value. The health statuses of components depend on the thresholds that you set for the corresponding metrics in the Threshold Setting screen. The status of a component changes to Critical if the performance values for the related metrics cross the threshold level at least once; the status changes to Warning if the performance values have crossed 95% of the threshold value but are within 100% of the threshold value; and the component appears in the Normal state if the performance values for the corresponding metrics are less than 95% of the set threshold values. If you have not set the threshold limit for even one metric for an array, the Unknown status icon appears for that array in the dashboard.

The Threshold Setting screen displays the primary metrics of an array when launched. Select the More Threshold option to view all the metrics. You can also configure and enable alerts on array components using this screen. PA provides two destinations that you can configure for receiving alert notifications: email destination and SNMP destination. An icon indicates that the alerts are configured and enabled for an array.

In addition, the average usage summary for components is also derived from the set threshold duration and verified against the threshold limits set for the metrics in the particular category. Thereafter, the statistics are displayed on the dashboard screens.

IMPORTANT: The threshold values are local to a management station. When you edit these values from a management station (in a decoupled installation scenario, where you have installed the PA Management Station and the PA database on separate systems), the updated threshold values are not reflected across all the other management stations. If you have already configured an alert for the component whose threshold value is changed, the alert threshold value is updated across all management stations, and alerts trigger for the newly set values. You can view the updated threshold values for the alerts on the Alerts screen.

The average usage of components is monitored for those categories where threshold limits are set for the corresponding metrics. If either XP or XP7 disk arrays are monitored, only the threshold settings table for the monitored array type is displayed. You can plot graphs of the usage details for components in the chart work area in the detail pane for individual components, and also view the X busiest components in the array-level dashboard.
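The status rules above reduce to a simple comparison against the set threshold. A minimal sketch of that mapping follows (illustrative only; PA applies these rules internally per metric):

from typing import Optional

def component_status(value: float, threshold: Optional[float]) -> str:
    """Documented mapping: Unknown with no threshold set, Critical at or
    beyond the threshold, Warning from 95% up to 100% of it, else Normal."""
    if threshold is None:
        return "Unknown"
    if value >= threshold:
        return "Critical"
    if value >= 0.95 * threshold:
        return "Warning"
    return "Normal"

# Example: a DKA Util (%) reading of 48 against the 50% default threshold
print(component_status(48.0, 50.0))  # Warning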

Threshold settings screen details for XP and P9500/XP7 disk arrays

IMPORTANT: If any of the below metrics exceeds the defined threshold limit, the status for these metrics changes to Critical. As a result, the status of the corresponding component also changes to Critical.

Array: Array serial number.
Array Type: Type of array.
More Threshold: Selecting this option displays all the threshold metrics and their corresponding threshold values.
Edit icon: When you click this icon, the Edit Threshold Setting screen appears, which enables you to edit the metric threshold and configure alerts.
Alert: Displays one of the following icons: an icon indicating that all the threshold alerts are configured for an array, or an icon indicating that no threshold alerts are enabled for the array.
SA: Displays the System Alerts and their default values. These alerts are automatically configured on starting performance collection for an array.
Port IOPS - Frontend: Indicates the average I/Os that you define an individual port can handle over a specified duration. PA uses this value to verify whether the average I/Os on each port are within or beyond the set threshold.

Port MBPS - Frontend: Indicates the average MB/s that you define an individual port can handle over a specified duration. PA uses this value to verify whether the average MB/s value on each port is within or beyond the set threshold.
CHA Util (%): Indicates the average overall MP utilization on an installed CHA. PA uses this value to verify whether the average overall utilization of each CHA MP is within or beyond the set threshold. The default threshold value is 65%. NOTE: The CHA Util (%) metric is applicable for the CHA MPs only on the XP disk arrays.
Cache Usage Util (%): Indicates the total cache that you define to be utilized for both the frontend and backend transactions over the threshold duration. PA uses this value to verify whether the total cache usage is within or beyond the set threshold limit. The frontend transactions comprise data transfers between the cache and the frontend ports. The backend transactions comprise data transfers between the cache and the LDEVs through the DKAs. The cache utilization is in the Ok status when the utilization value is less than 95% of the threshold limit; when the utilization value crosses the threshold limit, the metric is flagged as Critical.
Cache Write Pending Util (%): The write pending metric comprises the number of writes that are pending to be written from the cache to the LDEVs. The default threshold value is 50%. The Write Pending (%) threshold value indicates the total number of write operations that you define can be pending with the cache over the threshold duration. PA uses this value to verify whether the total write pending operations are within or beyond the set threshold limit.
DKA Util (%): Indicates the average overall utilization of the MPs that can be utilized on an installed DKA over the threshold duration. PA uses this value to verify whether the average overall utilization of each installed DKA is within or beyond the set threshold. The default threshold value is 50%. NOTE: The DKA Util (%) metric is applicable for the DKA MPs only on the XP disk arrays.

RG Seq Reads - Backend Tracks: Indicates the average sequential backend read tracks that you define an individual RAID Group can manage over the threshold duration. A track is defined as the slot used by the LDEVs. The slot size for the different emulations is as follows:
• Open-V emulation: 64 KB slot size for XP1024 arrays; 256 KB slot size for XP12K, XP24K, or P9500 arrays
• Other emulations, like Open-X: 48 KB slot size
PA uses this value to verify whether the average sequential I/Os on each RAID Group are within or beyond the set threshold limit.
RG NonSeq Reads - Backend Tracks: Indicates the average non-sequential backend read tracks that you define an individual RAID Group can manage over the threshold duration. PA uses this value to verify whether the average non-sequential I/Os on each RAID Group are within or beyond the set threshold limit.
RG Writes - Backend Tracks: Indicates the average sequential backend write tracks that you define an individual RAID Group can manage over the threshold duration. PA uses this value to compare whether the average writes (I/Os) on each RAID Group are within or beyond the set threshold limit.
RG Util (%): Indicates the average overall RAID Group utilization that you define for an individual RAID Group over the threshold duration. PA uses this value to verify whether the average overall utilization of each RAID Group is within or beyond the set threshold limit. The default threshold value is 50%. If the utilization of one RAID Group exceeds the defined threshold, the status icon changes to Critical.

Host Group Avg Response Time (msec): Includes both the average read response time and the average write response time of an individual Host Group; that is, the average response time of the LDEVs that are configured for a host group. It is the time taken in milliseconds for the read and write I/Os from the time the read and write commands are received at the array host port to the time the array has returned a good status frame to the host indicating successful completion of the commands. The data transfer time, which relies on the switch buffer availability to send data frames to the array, is included in the response time. PA uses this value to verify whether the average response time on each Host Group is within or beyond the set threshold, and the status is updated accordingly. If the average response time on a Host Group exceeds the defined threshold limit, the status icon changes to Critical. The default threshold value is 5 msec.

Host Group IOPS - Frontend: Indicates the total I/Os that you define an individual Host Group can handle over a specified duration. PA uses this value to verify whether the total I/Os on each Host Group are within or beyond the set threshold, and the status is updated accordingly. If the total I/Os on a Host Group exceed the defined threshold limit, the status icon changes to Critical.

Host Group MBPS - Frontend: Indicates the total MB/s that you define an individual Host Group can handle over a specified duration. PA uses this value to verify whether the total MB/s on each Host Group is within or beyond the set threshold, and the status is updated accordingly.

Pool Capacity Utilization: Indicates the proportion (%) of the used capacity of the pool to the total capacity of the pool. PA uses this value to verify whether the overall capacity utilization of each pool is within or beyond the set threshold. The default threshold value is 80%.

Pool Avg Read Response Time (msec): Indicates the average read response time of the LDEVs that are configured for a pool. It is the time taken in milliseconds for read I/Os from the time the read command is received at the array host port to the time the array has returned a good status frame to the host indicating successful completion of the read command. The data transfer time, which relies on the switch buffer availability to send data frames, is included in the read response time. PA uses this value to verify whether the average read response time on each pool is within or beyond the threshold, and the status is updated accordingly. If the average read response time on a pool exceeds the defined threshold limit, the status icon changes to Critical. The default threshold value is 5 msec.

Pool Avg Write Response Time (msec): Indicates the average write response time of the LDEVs that are configured for a pool. It is the time taken in milliseconds for write I/Os from the time the write command is received at the array host port to the time the array has returned a good status frame to the host indicating successful completion of the write command after reception of the data from the host. The data transfer time, which relies on the switch buffer availability to send data frames to the array, is included in the write response time. PA uses this value to verify whether the average write response time on each pool is within or beyond the set threshold, and the status is updated accordingly. If the average write response time on a pool exceeds the defined threshold limit, the status icon changes to Critical. The default threshold value is 5 msec.

CA PVOL Avg Response Time (msec): Includes both the average read response time and the average write response time of an individual PVOL; that is, the average response time of an LDEV that is configured as a PVOL. It is the average time taken in milliseconds for the read and write I/Os from the time the read and write commands are received at the array host port to the time the array has returned a good status frame to the host indicating successful completion of the commands. The data transfer time, which relies on the switch buffer availability to send data frames to the array, is included in the response time. PA uses this value to verify whether the average response time on each PVOL is within or beyond the set threshold, and the status is updated accordingly. If the average response time on a PVOL exceeds the defined threshold limit, the status icon changes to Critical. The default threshold value is 5 msec.

CA Recovery Point Objective (secs): Indicates the CA recovery point objective in seconds on each LDEV for a defined threshold duration. The default threshold value is 5 sec.

LDEV Average Read Response (msec): Indicates the average time taken in milliseconds for read I/Os from the time the read command is received at the array host port to the time the array has returned a good status frame to the host indicating successful completion of the command after transferring the data. The data transfer time, which relies on the switch buffer availability to receive data frames, is included in the read response time. PA uses this value to verify whether the average read response time on each LDEV is within or beyond the set threshold, and the status is updated accordingly. The default threshold value is 5 msec.

LDEV Average Write Response (msec): Indicates the average time taken in milliseconds for write I/Os from the time the write command is received at the array host port to the time the array has returned a good status frame to the host indicating successful completion of the write command after reception of the data from the host. The data transfer time, which relies on the switch buffer availability to send data frames to the array, is included in the write response time. PA uses this value to verify whether the average write response time on each LDEV is within or beyond the set threshold, and the status is updated accordingly. If the average write response time on an LDEV exceeds the defined threshold limit, the status icon changes to Critical. The default threshold value is 5 msec.

MP Blade Util (%): Indicates the average overall utilization of the MPs that can be utilized on an installed MP Blade over the threshold duration. PA uses this value to verify whether the average overall utilization of each installed MP Blade is within or beyond the set threshold limit. The default threshold value is 60%. NOTE: The MP Blade Util (%) metric is applicable for MP Blades only on the P9500/XP7 disk arrays.

Pool Tier IOPS - Frontend: Indicates the I/Os that you define an individual tier can handle over a specified duration. PA uses this value to verify whether the I/Os on each tier are within or beyond the set threshold, and the status is updated accordingly. If the I/Os on a tier exceed the defined threshold limit, the status icon changes to Critical.

LDEV IOPS: Indicates the average I/Os that you define for an individual LDEV over a specified duration. PA uses this value to verify whether the average I/Os handled on each LDEV are within or beyond the set threshold.

LDEV MBPS: Indicates the average MB/s that you define for an individual LDEV over a specified duration. PA uses this value to verify whether the average MB/s on each LDEV is within or beyond the set threshold.

Set threshold limits for XP and XP7 disk arrays from the Threshold Settings screen

Procedure

1. From the HPE XP7 Performance Advisor main menu, click Threshold Settings. The Threshold Settings screen appears, displaying the following:
If both XP and XP7 disk arrays are monitored, the Threshold Settings for XP disk array and the Threshold Settings for P9500 disk array tables are displayed. NOTE: The P9500 disk array belongs to the XP7 disk array family.
Only the primary metrics are displayed. Use the More Thresholds option to view all the metrics.

2. Perform one of the following:

Click the edit icon against the array for which you want to edit the threshold limits and configure alerts on the XP or P9500/XP7 disk array. Enter the threshold value in the text boxes for each metric, select the check boxes against a metric, and click Save. After clicking Save, the threshold values are saved, and only one line item is created for each metric; all the components are tracked as a part of that metric.

Click Reset to revert all the threshold values to the original recommended PA values for an array.

When you define the threshold limits, PA verifies the usage of components against the set threshold limits. Accordingly, the appropriate status icons and the average usage summary values are displayed on the Dashboard screen. If the threshold limit is not set, or if it is set and later deleted without entering any value, - (dash) appears in the metric text box.

IMPORTANT: Use integers as threshold values, not decimal numbers.

You can specify the threshold limit for individual categories or for all the categories, based on the requirement. There is no maximum limit on the threshold values.

PA displays the default threshold values for the following metrics. You can retain these values or enter new values for your environment (these defaults are also collected in the sketch at the end of this chapter):

CHA Util (%): 65%
Cache Write Pending Util (%): 50%
DKA Util (%): 50%
RG Util (%): 50%
Pool Capacity Util (%): 80%
MP Blade Util (%): 60%. The MP Blade Util (%) metric is applicable only for XP7 disk arrays, so the (Unknown) status icon appears for the XP disk arrays in the Processors category on the Dashboard screen.
Host Group Avg Response Time (msec): 5 ms
Pool Read Avg Response Time (msec): 5 ms
Pool Write Avg Response Time (msec): 5 ms
CA Recovery Point Objective (sec): 5 s
CA PVOL Avg Response Time (msec): 5 ms
LDEV Avg Read Response (msec): 5 ms
LDEV Avg Write Response (msec): 5 ms

PA does the following:

1. Displays the threshold line (red dotted line) on the charts, but only for the metrics that have values defined in the Threshold Settings screen for the respective arrays.
2. Updates the status for components as Critical, Warning, or Ok.

Importing threshold values from a different array

Use the Copy option on the Edit Threshold Settings screen when you want to import the threshold values of an array to another array of the same model that is managed by PA.

Prerequisites

Ensure that you have administrative privileges.
Ensure that both arrays are of the same model.

Procedure

1. From the HPE XP7 Performance Advisor main menu, click Threshold Settings.
2. Click the edit icon against the array for which you want to edit the threshold limits on the XP or P9500/XP7 disk array.
3. On the Edit Threshold Settings screen, click Copy.
4. Select an array in the for and from lists.
5. Click Copy.

Enable or disable alerts from the PA Settings screen

Prerequisites

Log in as an administrator or as a user with administrator privileges.
Ensure that the SMTP/SNMP settings are configured.

Procedure

1. From the HPE XP7 Performance Advisor main menu, click Threshold Settings.
2. On the Threshold Settings screen, click the edit icon against the array.
3. To enable an alert, select the Threshold Alert Selection check box against the metric on the Edit Threshold Settings screen. If you want to remove an alert, clear the Threshold Alert Selection check box.
4. Click Save.
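To make the threshold semantics of this chapter concrete, the following Python fragment sketches the comparison that PA describes, including the default values listed above and the 95% rule for cache usage. This is illustrative only; PA evaluates thresholds internally, and the function name and structure here are assumptions:

# Illustrative sketch only; PA performs these checks internally.
DEFAULT_THRESHOLDS = {
    "CHA Util (%)": 65,
    "Cache Write Pending Util (%)": 50,
    "DKA Util (%)": 50,
    "RG Util (%)": 50,
    "Pool Capacity Util (%)": 80,
    "MP Blade Util (%)": 60,
}

def status(value, limit, cache_usage_rule=False):
    """Return the status PA would display for a collected metric value."""
    if limit is None:
        return "Unknown"  # no threshold set; shown as '-' in the metric text box
    if cache_usage_rule:
        # Cache usage is Ok while below 95% of the limit, Critical otherwise.
        return "Ok" if value < 0.95 * limit else "Critical"
    return "Ok" if value <= limit else "Critical"

print(status(72, DEFAULT_THRESHOLDS["CHA Util (%)"]))  # Critical (72 > 65)
print(status(41, DEFAULT_THRESHOLDS["RG Util (%)"]))   # Ok (41 <= 50)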

Alerts

About Alerts

PA enables you to activate alerts on components so that timely notifications can be dispatched to the intended recipients when the performance of a component rises beyond a particular limit. The performance values of components are verified against the threshold limits that you set. PA also maintains a history of alerts for components from the time alerts are activated on them. It monitors performance and logs new records whenever the performance of components rises beyond or drops below the set threshold limits. This activity continues until the associated alerts are disabled or the corresponding component records are deleted. You can also view performance graphs for components on which alerts are generated. If you are monitoring the ThP pool utilization and have activated alert notifications for it, you can also forecast the date and time when the utilization will exceed the next threshold limit.

NOTE: When the PA database and the PA management station are installed separately (decoupled installation), alerts are triggered for the values that were last updated from any management station.

There are three types of alerts:

System alerts: These alerts are configured for primary metrics with predefined threshold values, and are enabled by default. The system alerts help in identifying the array components that have reached saturation levels and require immediate attention. You cannot disable these alerts. System alerts are configured and enabled for the Frontend CHA Utilization, Cache Write Pending, Backend DKA Utilization, Backend RG Utilization, and Pool Capacity Utilization metrics. The predefined, non-editable threshold values are:
Frontend CHA Utilization: 85%
Cache Write Pending: 60%
Backend RG Utilization: 80%
Backend DKA Utilization: 80%
Pool Capacity Utilization: 90%
MP Blade Utilization: 85%

PA default alerts: These alerts are configured and enabled for the metrics with default threshold values. You can edit the threshold values, and enable or disable these alerts.

User-defined alerts: When you edit the threshold values for PA default alerts, those alerts are called user-defined alerts. You can edit the threshold values, and enable or disable these alerts.

About configuring alert notifications

Alerts are triggered and notifications sent to selected users when the current performance value of a component crosses the set threshold level, which is also configured as the dispatch-at-threshold level. PA provides two destinations that you can configure for receiving alert notifications: an email destination and an SNMP destination. If you have specified an SNMP recipient for alerts, you must provide the IP address or system name of the SNMP server. The server receives the notifications in the form of traps that it forwards to the intended recipient. For more information, see Configure SNMP settings on page 154. If you have specified an email recipient address for alerts, PA identifies the difference between entering and exiting the dispatch-at-threshold level within the text, in the subject line of the email. If the alert is entering the dispatch-at-threshold level, PA identifies the alert as a normal alert and populates the subject line of the email with XP7 alert.

If the alert is exiting the dispatch-at-threshold level, PA identifies the alert as a recovery alert and populates the subject line of the email with XP7 alert - Good Information alert.

You can specify a common email destination address on the PA Settings screen, which is used for receiving all alert notifications. The email destination address is then automatically displayed in Email Destination in the Settings pane on the Alerts screen. For more information on specifying common email recipient addresses for all alert notifications, see Configure alert settings on page 153 and Configure SNMP settings on page 154.

Alerts screen details

Filter options:
Enable: Displays those records for which the alert is enabled. The system alerts are also displayed, as these are enabled by default. For more information on filtering metrics and alerts, see Filter records based on metrics and alert status.
Disable: Displays those records for which the alert is disabled.
Arrays: The list displays only those XP, P9500, and XP7 disk arrays for which alerts are configured. Select the XP, P9500, or XP7 disk array to view the corresponding alert records in the Alerts table.
Metric Category: Displays only those categories that are associated with the selected components. By default, all categories are displayed in the Configure Alerts pane. Select a metric category from the dropdown list for which you want to view the corresponding alert records.
Metric: Displays only those metrics that are associated with the selected components. Select the metric to view the corresponding alert records in the Alerts table.

Configure Alerts pane:
Enabled: Lists the option to enable or disable alerts on components.
Array: Displays the selected XP, P9500, or XP7 disk array name.
Resource: Displays the selected component. For system alerts and component-level alerts, Resource displays Component-ALL, which means that all the configured components for an array are tracked as a part of a single alert.
Threshold: Displays the threshold values for the metrics that are set on the Threshold Settings screen, and also the option to enter the threshold value for each component record.
Alert Type: Displays the type of alert. SA is displayed for the system alerts.

Settings pane:

No. of Occurrences: Specify the number of times the metric must cross the set threshold before triggering an alert.
Email Destination: Provide a valid email destination. By default, the email destination address for receiving the alert notifications is administrator@localhost.
SNMP Destination: To receive an SNMP notification, enter the IP address of the SNMP server that should receive and process the notifications in the text box under SNMP Destination.
Script Destination: Provides the script location.
Script File: Provide the script file.

Set alerts

You can configure and activate alerts on components only if you have logged in to PA as an administrator, or as a user who is granted administrator privileges. Once the alerts are activated, you can view history for these alerts, which provides data on the following:

When the performance of components went beyond or dropped below the set threshold limits
Time stamps of alert notifications dispatched to the intended recipients

Prerequisites

To activate alerts on components, you must first select the components and the corresponding metrics for which the performance must be monitored, from the Alerts screen or from the Threshold Settings screen. Then, proceed to configure alerts on those components. This includes specifying the following settings:

Threshold and number of occurrences on the components.
Alert notification settings, which include the email notifications and SNMP notifications. In addition to notifications, you can also configure PA to run an XML file or a script when the component's performance value crosses the set threshold.
Enable or disable alerts on the components.

Procedure

1. Click the HPE XP7 Performance Advisor main menu, and click Alerts.
2. In the Alerts screen, click Create Alert.
3. In the Create Alert screen, select the array, the component type, and then the individual component on which you want to set an alert.
4. Select the associated metrics for which the components must be monitored from the Available Metrics > Choose Metrics Category list. The selected components can belong to an individual XP or XP7 disk array, or to a custom group. You can also add alerts at the component level, which means that you can track multiple components for a metric using a single record, by clicking the root node.

5. Click Create. The records are automatically displayed in the Configure Alerts pane. Initially, when alerts are not yet configured on the selected components, the informational message No alerts are configured for the given filters. appears above the Alerts table.

NOTE: You can also add alerts by right-clicking the performance or utilization chart of a component and selecting Add Alert(s).

6. In the Alerts table, select an alert record, configure the threshold, number of occurrences, and alert notification settings, and enable alerts on the components.

For a new component record, the following default values are displayed in the Alerts table:

Selected XP or XP7 disk array name under Array
Selected component under Resource
Selected metric category under Metric Category
Selected metric under Metric
Threshold value, if already set, under Threshold
The email destination and SNMP addresses configured on the PA Settings screen. If not configured, the Email Destination and SNMP Destination fields are shown blank.

After you enable alerts on components, PA does the following:

1. Collects the latest performance values of the components in every collection frequency cycle and compares them with the set threshold levels.
2. Dispatches the appropriate alert notifications, but only when the performance value of a component has exceeded the threshold for the number of performance cycles set as the occurrence. For example, if the user sets the occurrence value as 2 for a component while configuring the alert, PA dispatches a notification only when the performance value of the metric remains above the threshold for two consecutive performance cycles. However, any drop in the performance value below the threshold after a serious alert is immediately notified to the user.
3. Logs a record for those components in the Alert History table. PA starts logging records from the subsequent data collection cycle after it starts monitoring the selected components. PA also displays the appropriate times for the above-mentioned events under Time Posted, Time Updated, and Time Dispatched in the Alert History table.

NOTE: If you configure alerts at the host group level and then edit a host in any port, the alert notification is sent only for the common host group that is configured. Notifications are not sent for the edited host group. To receive notifications for the edited host group, reconfigure the alerts. If you change the name of a host group that has alerts configured, delete and reconfigure all alerts for that host group.

Enable or disable alerts

By default, PA monitors only those components for which alerts are enabled, and sends the appropriate notifications to the intended recipients when required. Even though threshold and dispatch settings are configured on components, the components are not monitored until you enable alerts on them. You must manually activate or enable an alert on a component for PA to start monitoring the selected component and send notifications.

1. Click the HPE XP7 Performance Advisor main menu, and click Alerts.
2. In the Alerts table in the Configure Alerts pane, select the component records for which you want to specify the threshold level. You can also filter the component records in the Alerts table.

To enable alerts on components, select the Enabled check box. By default, the current state for a newly added component record appears as Disabled in the Alerts table. Once the alert record is enabled, PA monitors the selected component.

To disable alerts on components, clear the Enabled check box. Once the alert record is disabled, PA does not monitor the selected component.

To configure notification and monitoring settings across component records, use Shift for sequential selection of records and Ctrl for random selection of records.

Filter records based on metrics and alert status

These filters are enabled only when you add alert records for a component in the Alerts table. By default, all the alert records configured on the selected XP and XP7 disk arrays and components are displayed in the Alerts table. See the Alerts screen details for a description of the filter options on the Alerts screen.

Example 1: Filtering records

Assume that you have filtered records in the Alerts table for RAID Groups 1-3 and 1-5, and that their associated metrics are RAID Group Total IO Frontend, RAID Group Total MB Frontend, and RAID Group Sequential Read Tracks Backend. The Metrics list displays the RAID Group Total IO Frontend, RAID Group Total MB Frontend, and RAID Group Sequential Read Tracks Backend metrics. The following combinations of records are displayed in the Alerts table:

RAID Group 1-3 and the RAID Group Total IO Frontend metric
RAID Group 1-3 and the RAID Group Total MB Frontend metric
RAID Group 1-3 and the RAID Group Sequential Read Tracks Backend metric
RAID Group 1-5 and the RAID Group Total IO Frontend metric
RAID Group 1-5 and the RAID Group Total MB Frontend metric
RAID Group 1-5 and the RAID Group Sequential Read Tracks Backend metric

If you want to configure alert settings only on RAID Group 1-3 for the RAID Group Total IO Frontend metric, select RAID Group Total IO Frontend from the Metrics list and Passive from the Alerts Status list. The set of RAID Group records is further filtered to display only RAID Group 1-3 for the RAID Group Total IO Frontend metric and the Passive alert status.

Click Clear Filter at any time while selecting values from the filter options. It removes the current selection and displays all the records in the Alerts table.

Set alert notifications

Prerequisites

A valid source email address, and the IP and port addresses of the SMTP servers, are specified. For more information, see About PA Settings. PA uses the specified SMTP server details to dispatch email notifications to the intended recipients.
Specify a community name (Public or Private) for the source SNMP server. By default, Public is used as the community name.

You can also specify a common subject line for all the alert email notifications, an appropriate title for the Good Information alert notifications, and a community name (Public or Private) for the source SNMP server. For more information on setting the above-mentioned parameters, see Configure SNMP settings on page 154.

IMPORTANT: By default, PA dispatches Good Information alert notifications. However, if this is disabled, you must enable the Good Info Alert Flag check box on the Settings screen to receive the XP7 Alert - Good Information Alert notifications. For more information, see Configure alert settings on page 153.

Procedure

1. Click the HPE XP7 Performance Advisor main menu, and click Alerts.
2. In the Configure Alerts pane, click the metric for which you want to set an alert notification. You can also filter the component records in the Configure Alerts pane. To receive an email notification, type the email address in the text box under Email Destination. By default, email notifications are sent to administrator@localhost, which is the common email destination address for all alert notifications. This address is valid until:
You specify a different email destination address on the PA Settings screen. The alert notifications generated after this change are redirected to the new email destination address. For more information, see Configure alert settings on page 153.
You specify a different email destination address in the Email Destination box. The new address is applicable only for the set of records that you selected in the Alerts table.
3. To receive an SNMP notification, enter the IP address of the SNMP server that should receive and process the notifications in the text box under SNMP Destination. The changes are updated in the PA database and accordingly reflected in the Email Destination box and the SNMP Destination box for the selected component records.

Sample email notification for a P9500 Disk Array

Sample SNMP notification for a P9000 Disk Array
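The dispatch behavior described above (occurrence gating, plus the serious and recovery subject lines) can be summarized in a small sketch. The following Python fragment is a hypothetical illustration, not PA source code; the class name and structure are assumptions, while the subject lines and the occurrence semantics follow the text above:

# Hypothetical sketch of the dispatch logic described above; not PA source code.
class AlertState:
    def __init__(self, threshold, occurrences):
        self.threshold = threshold
        self.occurrences = occurrences   # "No. of Occurrences" setting
        self.breaches = 0                # consecutive cycles above threshold
        self.serious = False             # a serious alert has been dispatched

    def on_sample(self, value):
        """Called once per collection cycle; returns the email subject to send, if any."""
        if value > self.threshold:
            self.breaches += 1
            if not self.serious and self.breaches >= self.occurrences:
                self.serious = True
                return "XP7 alert"                           # entering threshold
        else:
            self.breaches = 0
            if self.serious:
                self.serious = False
                return "XP7 alert - Good Information alert"  # exiting threshold
        return None

state = AlertState(threshold=50, occurrences=2)
for value in (55, 58, 61, 40):   # two consecutive breaches, then recovery
    subject = state.on_sample(value)
    if subject:
        print(subject)
# Prints "XP7 alert" after the second breach, then the Good Information alert.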

Establish scripts for alerts

Procedure

1. In addition to configuring email and SNMP destinations for receiving alert notifications, you can also configure a script or batch file to be executed when an alert is triggered. To provide the path for executing scripts, click the HPE XP7 Performance Advisor main menu, and click Alerts.
2. In the Configure Alerts pane on the Alerts screen, select the component records for which you want to specify the threshold level. You can also filter the component records in the Alerts table.
3. Provide the script location in the text box under Script Destination. PA automatically executes the script when the performance of a component crosses the set threshold level. The output of the .bat file is written to the absolute path location (for example, the C:\Users\Administrator folder or the system32 folder), as it is platform dependent. Therefore, ensure that you provide the absolute path while creating the .bat file.

Sample script file

The following is an example of a script file: C:/Temp/a.xml. The format of the XML file should be as follows:

<?xml version="1.0" encoding="iso-8859-1"?>
<!-- A sample XML file describes the script to be executed -->
<Service>
<Method>Run Script</Method>
<!-- Enter the full path name of your script file -->
<Full-Script-Path>C:\Temp\a.bat</Full-Script-Path>
</Service>
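What the referenced batch file does is up to you. As one hypothetical example, a.bat could contain a single line such as python C:\Temp\alert_handler.py, where the handler appends a timestamped entry to a log. Everything below (the file names, log path, and message format) is an assumption for illustration; PA only runs the script you point it at:

# Hypothetical alert-handler script (C:\Temp\alert_handler.py) invoked from a.bat.
import datetime

# Use an absolute path: as noted above, relative output lands in a
# platform-dependent working directory (for example, C:\Users\Administrator).
LOG_FILE = r"C:\Temp\pa_alerts.log"

def main():
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open(LOG_FILE, "a", encoding="utf-8") as log:
        log.write(stamp + " PA threshold alert triggered\n")

if __name__ == "__main__":
    main()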

Delete alert records

Procedure

1. Click the HPE XP7 Performance Advisor main menu, and click Alerts.
2. In the Alerts table in the Configure Alerts pane, select the metric for which you want to delete the alert that you have set.
3. Click Delete Alert. The records are permanently removed from the Alerts table. Once the alert is deleted, the action is updated in the Event Log screen.

Alert History

About Alert History

PA maintains the history of alerts for components if the alerts are already configured and enabled on them. Initially, the message No records found matching the given filter criteria is displayed if there are no component records posted in the Alert History table. To view the Alert History table, navigate to the PA main menu, and then click Alert History.

Understanding alert history

For PA to start monitoring the performance of a component and generate an alert, you must configure the required threshold and dispatch settings on the Alerts screen, and also enable an alert for that component. For more information on configuring alerts, see About Alerts. After an alert is configured, PA monitors the performance of the component from the next data collection cycle. A new record is displayed on the Alert History screen, with the time of posting appearing under Time Posted.

IMPORTANT: PA posts the record only if data collection is in progress for the XP/XP7 disk array to which the component belongs.

While configuring an alert, if you set the threshold and dispatch settings but do not enable the alert for a component, PA does not monitor that component or generate an alert when required. In every data collection cycle, PA retrieves and compares the current performance value of a component with the set threshold value. The time when this value was retrieved and compared is shown under Time Updated. If the current performance value exceeds the set threshold value, PA does the following:

1. Posts a new record and displays the time of posting under Time Posted
2. Dispatches an alert notification of type XP7 Alert to the intended recipient
3. Displays the time of dispatch under Time Dispatched
4. Monitors the component until its performance value drops below the set threshold value
5. Updates the time of monitoring under Time Updated

IMPORTANT: The time displayed under Time Updated is in sync with the data collection cycle frequency.

In the case of a decoupled setup:
The triggered alerts display the host name of the server where the database is installed.
The time posted for the triggered alerts is based on the time of the server on which the database is installed.

If the performance value of a component drops below the set threshold value, PA does the following:

1. Posts a new record and displays the time of posting under Time Posted
2. Dispatches an alert notification of type XP7 Alert - Good Information alert to the intended recipient
3. Displays the time of dispatch under Time Dispatched
4. Monitors the component continuously to verify whether its performance is within or beyond the set threshold level

Alert History screen details

The following are the column headings under which alert history records are displayed.

Table 15: Alert History Filters

Metric: Displays a list of metrics for which components are selected and alerts configured on them. If you have used the first level of filters, the Metric list displays only those metrics for which alerts are created on the selected components. In addition, the All option lists all the alert history records that are created on the different components in the selected XP or XP7 disk array.

Error Status: Displays a list of error types:
Email errors
SNMP errors
Script errors
All errors
No errors
Select one of the above-mentioned error types to filter records and view the status of the respective alert email and SNMP notifications, and script executions. If you select Email errors, SNMP errors, Script errors, or All errors, PA returns anything that is non-zero for these selections. If you select No errors, PA displays only zero items, that is, the alerts that were successfully dispatched.

Arrays: Displays the disk arrays for which alerts are generated.

Time Stamp: This list displays the following options:

Time posted (default selection): If this option is selected, the time stamps of when the records were posted on the Alert History screen are displayed. A record for a component is first posted on the Alert History screen when the following conditions are met:
The alert is enabled on the component.
Performance data collection is in progress.
PA pings the component in the next data collection cycle to receive its current performance value, and also posts a new record on the Alert History screen. PA again posts a new record for the same component and displays the new time of posting under Time Posted when one of the following conditions is met:
The alert is disabled, or there are no I/O transactions on the component.
The performance of the component rises beyond or drops below the set threshold level.

Time updated: If this option is selected, the time stamps when PA last collected the latest performance values for all the components are displayed.

Time dispatched: If this option is selected, the time stamps when PA dispatched the alert notifications are displayed.

If a record shows a blank entry for any of these time stamps, that particular record is skipped during the filtering phase. For example, assume that I/O transactions are not happening on a particular component and the alert is also disabled. In such a case, Time Updated displays a blank entry for that component record. Hence, the record is skipped when you filter based on the Time updated option.

Alert Type: This list displays the following options:

All: This option is for viewing both the serious and the recovery alerts.
Recovery Alert: This option is for viewing records that are logged for alert notifications dispatched after the performance of a component dropped below the set threshold limit.
Serious Alert: This option is for viewing records that are logged for alert notifications dispatched when the performance of a component rises beyond the set threshold limit. An alert notification is dispatched only the first time the performance of a component goes beyond the set threshold limits.

Start Time and End Time: From the respective calendars, select the start and end time range for filtering the component records.

Table 16: Viewing Alert History records

Alert State: Displays the current state of an alert: Recovery Alert or Serious Alert.
DKC/Grp (Array Name): Displays the array model to which the selected component belongs.
Array Type: Displays the array type to which the selected array model belongs.
Metric: Displays the metric for which a component is monitored. When you select the All option in the Metrics list, the alert records configured on the selected component are displayed in the Alerts table.
Resource: Displays the component that is monitored for a particular metric and metric category. NOTE: The Resource is displayed as PVOL: LDEVID (serial number) and SVOL: LDEVID (serial number) for the Pair Status alerts.
Value: Displays the current performance value that is recorded for a component.

Dispatch Threshold: Displays the threshold value that you set for the component. PA triggers an alert if the performance value of a component rises beyond or drops below the set threshold value.
Time Posted: Displays the time when a record was first displayed for a component on the Alert History screen. Time Posted displays a new time stamp again when PA creates a record for the same component after dispatching the appropriate alert notification.
Time Updated: Displays the time stamp when PA updates the current performance value of a component under Value. Time Updated does not display any time stamp if the alert configured for a component is deleted, or if the alert is disabled on the Alert Configurations screen.
Time Dispatched: Displays the time stamp when the alert notification is dispatched to the intended recipient.
Status: Displays the status of email and SNMP notifications, and script execution. The five possible statuses are as follows (a decoding sketch follows the filtering procedure below):
Status 0: Timed Out: the alert could not be dispatched in the given time (this time is specified in the Email_TimeOut field of the serverparameters.properties file).
Status 1: Incorrect SNMP Setting: the SNMP address is invalid.
Status 2: SNMP Protocol Error: there was a problem sending the notification to the SNMP server.
Status 3: Failed to dispatch: a runtime problem, such as network connectivity.
Status 4: Successful.
NOTE: If email, SMTP, or script is not configured, the status is displayed as NA.

Filter records in the Alerts History table

Procedure

1. From the HPE XP7 Performance Advisor main menu, click Alert History.
2. Filter and view component records based on the options described in Table 15: Alert History Filters.
3. Click Filter. PA filters the existing set of records and displays only those that match the selection criteria on the Alert History screen. The records are displayed in ascending order.

Click Clear Filter at any time while selecting values from the filter options. It removes the current selection and displays all the records in the Alerts History table.
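If you post-process alert history records outside PA (for example, after an export), the numeric Status codes from Table 16 can be decoded as shown in this hypothetical Python helper; the mapping follows the status list above, while the function itself is illustrative only:

# Status codes from the Alert History "Status" column (see Table 16 above).
DISPATCH_STATUS = {
    0: "Timed Out",
    1: "Incorrect SNMP Setting",
    2: "SNMP Protocol Error",
    3: "Failed to dispatch",
    4: "Successful",
}

def decode(status):
    if status is None:           # email, SMTP, or script not configured
        return "NA"
    return DISPATCH_STATUS.get(status, "Unknown (" + str(status) + ")")

print(decode(4))   # Successful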

Manage events

About Event Log

PA generates events in response to various activities that you perform using this application. Appropriate records are automatically displayed for all the events on the Event Log screen. For instance, records are logged for events generated when a performance data collection fails or the collection schedule is restarted.

IMPORTANT: By default, the Event Log screen displays records for events that have been generated in the last 24 hours.

The logged records contain the following details for an event:

Time when the event was logged.
Type of event logged.
Severity of the event.
Description.

In addition, you can view the following on the Event Log screen:

Historic data (data older than 24 hours), by specifying a date range for viewing the data.
Filtered event records, based on the severity and type of events generated.

Event Log screen details

Time: Displays the time when the event was logged.
Type: Displays the type of event logged.
Severity: Displays the severity of the event.
Description: Displays the event description.
Actions menu: Displays the options for Advanced Search, Delete, and Refresh.

View event logs

From the HPE XP7 Performance Advisor main menu, select Event Log.
Click a column heading to sort the records based on that column. By default, columns are sorted in ascending order. Click the column heading again to reverse the sort order.
To refresh the Event Log page, from the Actions menu, click Refresh.

Filter event records

For a search based on text entries:

1. From the HPE XP7 Performance Advisor main menu, click Event Log. By default, records for events logged in the last 24 hours are displayed.
2. Enter text in the Search Text box to filter the event records. You can search based only on the Description column using this option.
3. Click Search. The event records are filtered, and only those records that have the matching text are displayed on the Event Log screen.

For a search based on the duration, type, or severity of events logged:

1. From the Actions menu, select Advanced Search so that the following Event Log filters are enabled:
Start Time and End Time date and time filters
Type list
Severity list
You can search based on one or a combination of the above-mentioned parameters.
2. In the Advanced Search page, select the duration (start and end date and time) from the Start Time and End Time filters.
3. Select one of the following event types from the Type list. By default, the records for all types of events are displayed.
Database
Host
Configuration Data Collection
Alert Configuration
License
Reports
Register SVP
Export LDEV to csv (events generated while exporting LDEV data into a .csv file)
4. Select one of the following severity levels from the Severity list. By default, the event records for all levels of severity are displayed:

User Action: Errors for user-instigated activities, for example, if the user deletes a performance data collection schedule.
System Error: Exception errors given by PA.
Critical Error: Critical errors, where PA may not function.

Though you would have already set the severity level for event logging, this filter also displays the severity levels applicable to all events logged before you set the severity level. This is useful in cases where you want to view events generated prior to setting the severity level.

5. Click OK to filter the records. To remove the filtered records on the Event Log page, from the Actions menu, click Refresh.

Delete event records

Procedure

1. From the HPE XP7 Performance Advisor main menu, click Event Log. By default, records for events logged in the last 24 hours are displayed.
2. Select one or more records that you want to delete.
3. From the Actions menu, select Delete.
4. In the Delete Confirmation page, click OK.

View disk array components

About Summary View

PA provides the overall configuration and performance details of the components of the XP, P9500, and XP7 disk arrays on the Summary View main screen. The data is displayed from the last performance data collection time stamp and includes the following:

The configuration and component distribution summary.
The performance summary, which includes the average performance of the frontend and backend components, Cache, and CLPR for an XP disk array.
Historical graphs for the metrics listed in the tabular format. To plot charts for a metric in Summary View, click the hyperlink of a metric.

In addition to the above, the continuous access data and the average utilization of each MP Blade are also displayed for a P9500/XP7 disk array.

NOTE: The CHIPs and ACPs are applicable only for the XP20000 Disk Arrays. They are replaced by the CHAs and the DKAs for the XP24000 Disk Array and P9500 disk arrays.

IMPORTANT: Ensure that at least one round of configuration and performance data collection is completed for the selected XP/XP7 disk array to view the respective array and component details.

Figure 4: Summary View screen

The above figure displays the Summary View screen for array 10035, which belongs to the XP7 disk array family.

Further, to view the performance and utilization metrics at the component level in the disk array, click the Component drop-down list for the disk array and select the component from the list. Click each component under a particular component node to view the individual performance or utilization data. For example, clicking Ports for an XP7 disk array displays the performance summary of all the ports configured on the disk array at a particular time.

IMPORTANT: When you select components, such as the MP Blades or the Pools, only the installed MP Blades, or the configured ThP and Smart pools, are displayed in the respective lists.

Plot summary view on Chart Work Area

PA enables you to plot the summary data of components displayed in the tabular columns on a chart. You can plot charts of components and their metrics in the respective summary screens without navigating to the individual component screens. The data is refreshed every minute. Once a chart is plotted, it persists in the Chart Work Area even when you navigate to a different summary screen. Right-click a chart to save it or to send it to an email address.

When you select a component record from the summary screen of a component type, the Chart Work Area pane appears. To plot a chart for a specific metric for the selected component, click the metric value displayed in the table. The chart for the metric is plotted in the chart work area. For example, in the Port Summary screen, select the component CL1-A, and then click the Maximum IOPS metric for which you want to plot a chart. The Maximum IO is plotted on the Frontend IO Metrics chart in the chart work area. To plot a chart for the frontend MB metrics, select one or all of the MB metrics, as required. The Frontend MB chart is appended to the chart work area, and the values for the frontend MB metrics are plotted in the chart.

View Array Summary

The summary of all components for the selected XP/XP7 disk array is displayed under the Summary View drop-down option. Initially, before you begin configuration collection, only the information related to the XP/XP7 volumes presented to the host is displayed on the Summary View screen. However, when you collect the configuration and performance data, the relevant information is displayed under the Summary View tab.

The XP/XP7 disk array summary includes the following:

Configure Information:
Model: The model number of the XP/XP7 disk array.
Micro Code: The array firmware version of the XP/XP7 disk array.
RMLIB: The RMLIB version installed on the host machine.

Volume Information: Displays the summary of all the components for the selected XP/XP7 disk array. A list of components and their counts is displayed. Initially, N/A is displayed beside each component because the configuration collection has not yet been initiated.

Volume information screen details

The following table provides a summary of all the components for the selected XP/XP7 disk array.

Physical LDEVs: The total number of LDEVs created from RAID Groups.
Ports: The total number of ports available from the installed CHAs.

LUNs: The number of LDEVs that have one or more associated paths (host connectivity). This is an aggregate of the following: physical LDEVs with paths, plus virtual volumes with paths, plus the total number of LDEVs in a Logical Unit Size Expansion (LUSE) that have an associated path. The LUSE feature is available when the HPE XP7 LUN Manager product is installed, and allows a LUN, normally associated with only a single LDEV, to be associated with 1 to 36 LDEVs. Essentially, LUSE makes it possible for applications to access a single large pool of storage.
LUSE: The total number of LDEVs configured in a LUSE.
RAID Groups: The total number of RAID Groups defined on the XP/XP7 disk array.
External VOLs: The total number of external volumes associated with an XP or XP7 disk array.
Continuous Access JNL Vols: The total number of physical LDEVs configured as continuous access journal volumes.
THP VVOLs: The total number of ThP virtual volumes defined on an XP or XP7 disk array.
Smart VVols: The total number of physical LDEVs configured as pool volumes. NOTE: Displayed only for the XP7 disk arrays.
SnapShot VVOLs: The total number of distinct snapshot VVols that are associated with one or more host ports.
Raw Capacity: The total installed capacity of an XP or XP7 disk array. It does not refer to the array usable capacity. For each RAID Group, the raw capacity is calculated as follows: (number of disks that belong to a particular RAID type * size of the disks). A worked example follows the View Array Performance pane list below.

View Array Performance

The Array Performance screen provides the overall array performance by measuring the total I/Os, and the read and write I/Os, on the array. The Array Performance screen comprises panes for the following:

Frontend Total Avg
Backend Total Avg
Bus/Path Util %
Cache
CHIP Port Activity Ave
CLPR Details
ACP Pair Backend
MP Blades Util %
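As promised above, here is a worked example of the raw capacity formula. The disk counts and sizes are invented for illustration; only the formula (number of disks multiplied by disk size, summed per RAID Group) comes from the description above, and raw capacity remains distinct from usable capacity:

# Worked example of the raw capacity formula: disks * disk size, per RAID Group.
# RAID Group names, disk counts, and sizes below are made-up illustrations.
raid_groups = [
    {"name": "1-1", "disks": 8, "disk_size_gb": 600},
    {"name": "1-2", "disks": 4, "disk_size_gb": 1200},
]

raw_total_gb = sum(rg["disks"] * rg["disk_size_gb"] for rg in raid_groups)
print("Raw capacity:", raw_total_gb, "GB")   # 8*600 + 4*1200 = 9600 GB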

View Port summary

To view the summary of the overall utilization of ports for an XP or XP7 disk array, from the PA main menu, navigate to Summary View > Ports. This feature totals the LDEV I/Os, LDEV MB/s, cache fast write, disk fast write, cache bypass, and backend transfer values for all of the LDEVs on a given port.

IMPORTANT: When you request a port summary report, the total I/Os displayed may not be equal to the sum of the I/Os across each of the ports. This can occur if multiple paths to an LDEV exist. The port IO summary indicates the IO ceiling values across the ports. It does not indicate the absolute or accurate I/O rates across the ports.

Historical graphs are plotted for the metrics listed in the tabular format. To plot charts for a metric in Summary View, click the hyperlink of a metric. The chart work area appears with the historical data plotted. Right-click an individual chart to perform the following actions: Real Time, Trend, Forecast, Save charts as PDF/CSV, Save chart as templates, and email charts.

Port Name: Displays the port name for the channel processor (CHP) port. The CHP is the processor located on the CHA, synonymous with the channel host interface processor (CHIP). Provides the option to view information associated with a particular port or with all ports. A channel adapter (CHA) is a device that provides the interface between the array and the external host system.
SLPR: Displays the SLPR with which the RAID Group is associated. NOTE: SLPR does not exist in the XP7 disk arrays, so the SLPR-related data is displayed only for the XP disk arrays.
SLPR Name: Displays the SLPR group ID. NOTE: SLPR does not exist in the P9500/XP7 disk arrays, so the SLPR group ID is displayed only for the XP disk arrays.
Port Type: Displays the port type, such as iSCSI or FCoE (applicable for P9500/XP7 disk arrays), for the port ID.
E-seq(s): Displays the Ext-Lun provider's serial number for the array.
Max IO/s: Displays the maximum frontend I/Os on the port.
Avg IO/s: Displays the average of the total frontend I/Os.
Min IO/s: Displays the minimum frontend I/Os on the port.
Max MB/s: Displays the maximum frontend throughput in MB/s.
Min MB/s: Displays the minimum frontend throughput in MB/s.
Avg MB/s: Displays the average frontend throughput in MB/s.

View RAID Group summary

To view the RAID Group summary, from the main menu, navigate to Summary View > Raid Groups. This feature totals the LDEV I/O and LDEV MB/s values for all the LDEVs on a given RAID Group. In addition, it also displays the utilization percentages for random read, random write, random write parity, sequential read, sequential write, and sequential write parity, and the overall RAID Group percentage utilization (the sum of the above percentages) on a given RAID Group. The RAID Group utilization percentage is not displayed for external storage volumes.

Figure 5: RAID Group summary

RG: The RAID Group to which the LDEV belongs.
SLPR: The SLPR with which the RAID Group is associated. NOTE: SLPR does not exist in the P9500/XP7 disk arrays, so the SLPR-related data is displayed only for the XP disk arrays.
SLPR name: The SLPR group ID. NOTE: SLPR does not exist in the P9500/XP7 disk arrays, so the SLPR group ID is displayed only for the XP disk arrays.
CLPR: The CLPR with which the RAID Group is associated.
CLPR name: The CLPR group ID.
LDEV IOPS: The total frontend I/Os for all random reads, random writes, sequential reads, and sequential writes during the reporting period.
LDEV MBPS: The total frontend throughput in MB/s for the LDEV.
Backend Transfer: The total number of backend tracks transferred to or from the XP array backend.

Combined Backend Transfer: An asterisk (*) displayed beside the combined backend transfer value indicates one of the following:
If any of the physical LDEVs from a RAID Group is configured in multiple ThP pools, the sum of the backend transfer on all the ThP pools is shown as the combined backend transfer for that RAID Group. (The backend transfer of each ThP pool is the sum of the backend transfer on the V-Vols belonging to that ThP pool.) A virtual volume (V-Vol) is the secondary volume in a Snapshot pair. When in PAIR status, the V-VOL is an up-to-date virtual copy of the primary volume (P-VOL). When in SPLIT status, the V-VOL points to data in the P-VOL and to replaced data in the pool, maintaining the point-in-time copy of the P-VOL at the time of the split operation.
If physical LDEVs from multiple RAID Groups are configured in a ThP pool, the combined backend transfer is reported as an aggregate value for all the RAID Groups.
% RGUtil Random Read: The random read utilization percentage for a RAID Group.
% RGUtil Random Write: The random write utilization percentage for a RAID Group.
% RGUtil Random Write Parity: The random write parity utilization percentage for a RAID Group.
% RGUtil Sequential Read: The sequential read utilization percentage for a RAID Group.
% RGUtil Sequential Write: The sequential write utilization percentage for a RAID Group.
% RGUtil Sequential Write Parity: The sequential write parity utilization percentage for a RAID Group.
Overall % RG utilization: The overall percentage utilization of a RAID Group, which is the sum of the random read, random write, random write parity, sequential read, sequential write, and sequential write parity percentages. (A worked example follows Figure 9 below.)

View Top10 Frontend IO

The Top 10 Frontend IO summary provides details of the ten busiest LDEVs and ports associated with an XP or XP7 disk array's frontend activities.

NOTE: If the number of busiest LDEVs or ports is less than ten, or if their utilization is zero, only the busiest components are displayed.

The 10 busiest LDEVs are selected based on the I/Os, and the 10 busiest ports are selected based on the average I/Os. The LDEV response time metrics, MAX READ RESP and MAX WRITE RESP (msec), are measured as the maximum response time over the last 30 seconds of the collection interval. For example, if your collection interval for the RAID Group is set to 5 minutes, the MAX value is calculated over the last 30 seconds of the 5-minute collection interval. The AVG READ RESP and AVG WRITE RESP (msec) are measured as the average response time calculated over the entire collection period.

For example, if the RAID Group collection interval is set to 5 minutes, the Average Response Time is calculated over the entire 5-minute collection period.

Click an LDEV ID or port ID to view the performance graphs for all the associated metrics in a chart window. To know more about charts, see Plotting charts.

IMPORTANT:
The response time is calculated from the time the I/Os are received by the CHA port until the time they are dispatched from the CHA port.
If the LDEV is a LUSE Master, the details of the individual LDEVs are considered for the busiest components, and not the sum of all the individual LDEVs.
The Maximum Port IO is the maximum of the last collection time stamp. For example, if the port I/O collection interval is set to two minutes, the Maximum Port I/O is calculated as the maximum value over the two-minute collection period.
The port type, such as FCoE (applicable for P9500/XP7 disk arrays), is also displayed for the respective port ID.

Figure 6: 10 Busiest Frontend LDEVs

Figure 7: 10 Busiest Frontend Ports

View Top 10 Backend IO

To view the 10 busiest LDEVs and RAID Groups associated with an XP or XP7 disk array's backend activities, from the main menu, navigate to Top 10 Backend IO. The top 10 busiest LDEVs are displayed under the LDEV tab, and the top 10 busiest RAID Groups are displayed under the Raidgroups tab. If the number of busiest LDEVs or RAID Groups is less than ten, or if their utilization is zero, only the busiest components are displayed. The 10 busiest LDEVs are selected based on the backend transfer rate, and the 10 busiest RAID Groups are selected based on the Overall % RAID Group Utilization.

IMPORTANT:
If the LDEV is a LUSE Master, the details of the individual LDEVs are considered for the busiest components, and not the sum of all the individual LDEVs.
The LDEV response time components, AVERAGE READ RESPONSE, MAXIMUM READ RESPONSE, AVERAGE WRITE RESPONSE, and MAXIMUM WRITE RESPONSE, are measured in milliseconds.

Figure 8: Top 10 Busiest Backend LDEVs
Figure 9: Top 10 Busiest Backend RAID Groups

View MP Blade utilization summary for XP7 disk arrays

NOTE: The MP Blade Utilization summary is not applicable for the XP24000 disk array in PA 7.2.
The table below provides the details that are displayed for an MP Blade on the MP Blades screen:

Table 17: MP Blade Utilization Summary

MP Blades: Displays the following details:
Selected MP Blade: The selected MP Blade ID. Each MP Blade ID includes the corresponding cluster number and the Blade location. For example, in the MP Blade ID MPB-1MA, 1 indicates the cluster number and MA indicates the Blade location.
Cluster number: Displays the cluster number.
Avg. Util %: The average utilization of an MP Blade by all the associated processing types. The average utilization is calculated as the average of all the individual processors in the MP Blade; for MPB-1MB, this is (MP1+MP2+MP3+MP4)/4. The average utilization by each processing type is due to its consumers that are using the CPU cycles. For example, if 70% is the average MP Blade utilization and there are five processing types, it indicates that, on average, the CPU cycles are utilized up to 70%. If 25% of that 70% constitutes the average MP Blade utilization by the Backend processing type, it indicates that 25% of the CPU cycles are utilized for processing the array backend activities.
Number of LDEVs, No. of Ext. Vols, No. of Cont. Access Jnl Groups: The number of consumers for the MP Blade, which can be LDEVs, external volumes, and continuous access journal groups. Together, these constitute the total number of consumers for the selected MP Blade.
Processors: Displays the following details:
MP Processor: The MP processor utilization data.
Avg Util (%): The average utilization of the MP Blade.
IO Buffer Count: The outstanding I/Os to be processed in the MP queue.
Processing Distribution: Displays the following details for the selected MP Blade component:
Processing Type: The list of processing types.
Avg. Util %: The average MP Blade utilization by each processing type. The average utilization is calculated as the average of all the individual processors in the MP Blade.
Top Components: Displays the following details about the top 20 consumers for the selected MP Blade:
Component: The ID of the consumer that is assigned to the MP Blade.
Component Type: The type of consumer (LDEV, journal volume, E-LUN).
Processing Type: The processing type that is utilizing the selected MP Blade to process consumer requests.
Avg Util (%): The average MP Blade utilization by the consumer. The top 20 count is derived based on each consumer's average utilization of the CPU cycles achieved through the associated processing type.
In addition, you can view the performance graphs in the MP Blades component screen.
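As a minimal sketch of the averaging described above (the utilization figures and processing-type names are hypothetical; the four-processor layout follows the (MP1+MP2+MP3+MP4)/4 example):

```python
# Hypothetical per-processor utilization (%) for one MP Blade, MPB-1MB.
processor_util = {"MP1": 62.0, "MP2": 74.0, "MP3": 68.0, "MP4": 76.0}

# Blade-level Avg. Util % = average of the individual processors.
blade_avg = sum(processor_util.values()) / len(processor_util)
print(f"MPB-1MB Avg. Util %: {blade_avg:.1f}")   # 70.0

# Hypothetical share of that utilization per processing type; the shares
# sum to the blade average, mirroring the 70%/25% example above.
processing_type_util = {"Open Target": 20.0, "Backend": 25.0,
                        "External": 10.0, "CA Journal": 10.0, "System": 5.0}
for ptype, util in processing_type_util.items():
    print(f"{ptype}: {util}% of the CPU cycles")
```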

View pools summary for P9500/XP7 disk arrays

PA provides the current configuration and performance data for the Smart pools and the ThP pools. Smart pools contain multiple storage tiers that can be categorized as upper and lower tiers. The upper tiers are created from SSD drives and used for storing frequently accessed data. The lower tiers can be created from SAS or SATA drives and used for storing less frequently accessed data.
The Smart Tiers are applicable only for the P9500/XP7 disk arrays and are configured using the ThP V-Vols. They ensure that the most heavily utilized ThP pages are relocated to the fastest drives in the ThP pool. A maximum of three tiers can be configured.
To view data about the different storage tiers and the RAID Group utilization for a Smart or ThP pool in a P9500/XP7 disk array, click Pools for the disk array in the component selection tree under Array View in the left pane.
You can click a particular record in the V-vol Settings, Pool Volume, or Pool Tiers field to highlight the record, and then click Plot Chart to choose the metrics and view the respective performance graphs.
IMPORTANT: The data on the Smart pools and the ThP pools is not displayed if the pools are not configured in the selected P9500 or XP7 disk array. The following error message is displayed: Smart and ThP pools are not configured for this P9500/XP7 Disk Array.

Table 18: Pool Information screen details

Pool Information: Displays the configuration and performance data of the Smart and the ThP pools. The data includes the following:
Pool ID, type, and the pool status.
I/O per second data, MB/second data throughput, and backend transfers between the cache and the drives.
Savings and compression ratio for FMD Gen2 capacity.
For more information, see the Pool Information table.
Pool <Pool ID> Details: Based on whether you select a Smart pool or a ThP pool, the following details are displayed:
The performance data of the V-Vols for the Smart pool or the ThP pool. The performance data includes the I/O per second data, MB/second data throughput, backend transfers, and the average read and write response time values of the V-Vols.
The utilization data for the respective RAID Groups and the pool LDEVs.
In addition, the following details are displayed only for a Smart pool:
The total and the used capacity
The capacity threshold value
For more information, see the Pool VVol Details table.

View configuration and performance data for Smart pools and ThP pools

The Pool Information table displays the following configuration and performance data for all the Smart pools and the ThP pools configured in the selected P9500/XP7 disk array.

Table 19: Pool Information table

Pool ID: Displays the Smart pool and the ThP pool IDs.
Pool Type: Displays the pool type as either Smart or ThP for the pool ID.
NOTE: A real time tier enabled Smart pool is displayed as Smart (Real time tier).

Pool Status: Displays the current status of the Smart pool or the ThP pool. Following are the statuses and their descriptions:
Normal: Indicates that the Smart or the ThP pool is functioning properly.
Over threshold: Indicates that the Smart or the ThP pool has crossed the threshold capacity that you set on the P9500/XP7 disk array.
Blocked: Indicates that the Smart or the ThP pool has reached 100% utilization and has gone into the suspended state. There is no more storage space left.
Failure: Indicates that the Smart or the ThP pool is in a failed state. The V-Vols performance data, the respective RAID Groups, and the pool LDEVs utilization data are not displayed for such pools.
IOPS: Displays the sum of the random and sequential read and write I/Os on the individual Smart pool or ThP pool.
MBPS: Displays the sum of the random and sequential reads and writes in MB/s on the individual Smart pool or ThP pool.
Backend Tracks: Displays the total backend tracks associated with the Smart pool or the ThP pool. It is an aggregate of all the backend transfers due to I/Os occurring on every V-Vol in the Smart pool or the ThP pool.
Max Read Response Time: Displays the maximum of the Max Read Response times of the pool virtual volumes.
Max Write Response Time: Displays the maximum of the Max Write Response times of the pool virtual volumes.
Average Read Response Time: Displays the average of the Average Read Response times of the pool virtual volumes.
Average Write Response Time: Displays the average of the Average Write Response times of the pool virtual volumes.

Tier Relocation Progress Rate: Displays the rate at which the relocation is performed at the end of each monitoring cycle. The value displayed is in the range of 0 to 100:
Displays 0 if no relocation is performed.
Displays 100 if the relocation is complete.
Displays any value between 0 and 100 if the relocation is in progress or incomplete.
NOTE: The Tier Relocation Progress Rate is applicable only for Smart pools.
Total Physical FMD Gen2 Capacity (GB): Displays the maximum physical usable FMD Gen2 capacity in a pool.
Used Physical FMD Gen2 Capacity (GB): Displays the FMD Gen2 physical capacity used in the entire pool.
Physical FMD Gen2 Usage Rate (%): Displays the usage rate (%) for FMD Gen2 physical capacity in a pool.
Total Logical FMD Gen2 Capacity (GB): Displays the maximum logical usable FMD Gen2 capacity in a pool.
Used Logical FMD Gen2 Capacity (GB): Displays the FMD Gen2 logical capacity used in the entire pool.
Saving (GB): Displays the savings (GB) achieved through compression.
Saving (%): Displays the savings (%) achieved through compression.
Compression Ratio: Displays the compression ratio.
(A sketch illustrating these capacity columns follows Table 20 below.)

View Monitoring Information

Click a pool in the pool information pane to see the monitoring details of that pool.

Table 20: Smart Pool monitoring information screen elements

Last available start time: Displays the last available monitoring cycle start time recorded in the PA database (PADB).
Last available end time: Displays the last available monitoring cycle end time recorded in the PA database (PADB).
Relocation Type: Displays the type of tier relocation. For example, Auto or Manual.
Monitoring Mode: Displays the mode of monitoring. For example, Period or Continuous.
Monitoring Status: Displays the status of a monitoring cycle. For example, Monitoring or Stop. If No-Status is displayed, issue an outband configuration collection to get the current status.
Frequency: Displays the duration of a monitoring cycle.
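To make the relationship between the FMD Gen2 capacity columns concrete, here is a minimal sketch. The derivations are assumptions for illustration (savings read as logical capacity written minus physical capacity consumed); the guide itself only defines the columns, and all figures are hypothetical.

```python
# Hypothetical FMD Gen2 capacity figures for one pool (GB).
used_logical_gb = 1200.0   # "Used Logical FMD Gen2 Capacity (GB)"
used_physical_gb = 480.0   # "Used Physical FMD Gen2 Capacity (GB)"

# Assumed derivations, for illustration only:
saving_gb = used_logical_gb - used_physical_gb          # "Saving (GB)"
saving_pct = 100.0 * saving_gb / used_logical_gb        # "Saving (%)"
compression_ratio = used_logical_gb / used_physical_gb  # "Compression Ratio"

print(f"Saving: {saving_gb:.0f} GB ({saving_pct:.0f}%), "
      f"ratio {compression_ratio:.1f}:1")
# Saving: 720 GB (60%), ratio 2.5:1
```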

View VVols data for Smart pools and ThP pools

The following data on the associated pool volumes and V-Vols is displayed in the Top 20 Pool V-Volumes pane for the selected Smart pool or ThP pool.

Table 21: Pool VVol Details table

Vvol: The V-Vols attached to the Smart pool or the ThP pool.
Vvol IOPS (metric): Displays the sum of the random and sequential read and write I/Os that are handled by the V-Vol.
Vvol MBPS (metric): Displays the sum of the random and sequential reads and writes in MB/s that are handled by the V-Vol.
Vvol Backend Tracks (metric): Displays the total backend tracks associated with the V-Vol. It is an aggregate of all the backend transfers due to I/Os occurring on the V-Vol in the Smart pool or the ThP pool.
Vvol Avg Read/Write Response Time (metric): Displays the average read and write response values of the V-Vol.
Vvol Tier Capacity distribution (for Smart Pool only): The distribution of the V-Vol capacity used across the Smart pool tiers.

View pool volumes

Table 22: Pool Volumes table

RG: Displays the RAID Groups that contribute to the Smart pools or the ThP pools.
RG Total Util %: Displays the total utilization of all the physical LDEVs and the pool LDEVs in the RAID Group.
RG Level: Displays the RG level of a particular RAID Group. For an external RAID Group, this column is blank.

Disk Type: Displays the disk type of a particular RAID Group. For an external RAID Group, this column is blank.
Pool LDEVs: Displays the individual pool volumes from the RAID Group that are included in the Smart pool or the ThP pool.

The V-vols Settings option controls the following:
The maximum number of V-Vol records you want to view.
NOTE: HPE recommends viewing a maximum of 150 records at a time, so that there is no performance impact.
The metrics based on which you want to sort the records. You can sort records based on the IOPS, MBPS, Backend Tracks, and the Avg Read/Write Resp Time metrics. By default, the V-Vol records are sorted based on the Avg Read/Write Resp Time values.
To configure the above-mentioned settings, click V-vols Settings in the Pool <Pool ID> Details table. An informational message on the V-Vol settings that you configured appears under the Pool <Pool ID> Details table header. For example, if you selected a maximum of 30 V-Vol records and sorting based on the IOPS, the following informational message appears: Top 30 Pool V-volumes sort by IOPS.
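Conceptually, the Top N listing behaves like a sort-and-slice over the V-Vol records. A minimal sketch with hypothetical V-Vol IDs and values (the function and data are illustrative, not part of PA):

```python
# Hypothetical V-Vol records: (V-Vol ID, IOPS, MBPS, avg read/write resp msec).
vvols = [
    ("00:10", 850, 6.5, 4.2),
    ("00:11", 120, 0.9, 9.8),
    ("00:12", 640, 5.1, 2.7),
    ("00:13", 990, 7.8, 6.3),
]

def top_n(records, n=30, key_index=1):
    """Return the n busiest records, sorted descending by the chosen metric.
    key_index 1 = IOPS, 2 = MBPS, 3 = Avg Read/Write Resp Time."""
    return sorted(records, key=lambda r: r[key_index], reverse=True)[:n]

# Equivalent of "Top 30 Pool V-volumes sort by IOPS" (only 4 records here).
for rec in top_n(vvols, n=30, key_index=1):
    print(rec)
```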

View Smart pool capacity

You can view the capacity details for a Smart pool. Click a record in the Pool Tiers pane to view the following details:

Pool Tier: The pool tier level.
Tier Type: Displays the tier levels for the selected Smart pool. Each tier level can be assigned to one of the following drive types:
SSD
SAS (15 krpm)
SAS (10 krpm)
SAS (7.2 krpm)
External (Low)
SATA (7.2 krpm)
External (High)
External (Mid)
Coexistence
For example, tier level 1 can be assigned to the SSD drive type, tier level 2 to the SAS drive type, and tier level 3 to the SATA drive type.
Total Capacity: Displays the total capacity of a tier level.
Used Capacity: Displays the amount of tier space that is already utilized.
Capacity Threshold: Displays the maximum storage that is accepted on a particular tier level. You must have set this capacity threshold value on the P9500/XP7 disk array.
% of Tier Configured: Displays the percentage of space allocated to each tier from the total pool capacity to create a pool.
Max IOPH Processed by the Tier: Displays the maximum IOPH value that a tier can process. If the value is NA, the latest data from the performance collection or monitoring cycle is unavailable.
Tier IO Hit %: Displays the rate (%) at which I/O reaches a particular tier as compared against the other tiers in a pool.

To understand how the different tiers are utilized by a Smart pool, compare the utilization of the individual RAID Groups that constitute the Smart pool. You can see the utilization of individual RAID Groups and also the overall utilization of all the RAID Groups. To do this, you must determine the drive types of the individual RAID Groups that constitute the Smart pool. The RAID Groups are listed under the Pool Volumes pane.

View Continuous Access summary

The Continuous Access summary screen provides data on the continuous access configurations (synchronous, asynchronous, and journal based) created in the selected XP or XP7 disk array. The configuration data includes the P-VOL, S-VOL, and associated port and RAID Group details.

If the P-VOLs and S-VOLs are configured based on the Consistency Group IDs (CTGs) to which they belong, the configuration data of the journal groups that manage the corresponding I/O transactions is also displayed. A CTG is a group ID that guarantees consistency in the sequence of asynchronous data transfers for a remote copy volume group.
To view the continuous access data for an XP/XP7 disk array, navigate to the main menu, click Summary, and then click Continuous Access. Then, select the XP/XP7 disk array from the Array menu. The table below describes the data displayed:

Table 23: Continuous Access configuration data

Primary Array: Serial number of the primary array (primary data center).
PVOL: LDEV configured as P-VOL on the primary data center. Displays the LDEV number in cu:ldev format.
Secondary Array: Serial number of the secondary array.
SVOL: LDEV configured as S-VOL on the secondary data center. Displays the LDEV number in cu:ldev format.
Pair Status: Current replication link status of the P-VOL or S-VOL on the selected array. The replication link status corresponds to the Continuous Access transactions happening on the selected disk array, and can be one of the following:
SMPL
COPY
PAIRED
Pair Suspend
Pair Suspend Error
Pair DUB
Reverse Copy
Pair SideFile 30% over
Pair SideFile over Suspend
SVOL Swap ready Suspend
Unknown
Volume Type: Type of volume (PVOL or SVOL) configured on the primary data center.
Provision Type: Thick or Thin; indicates a virtual volume or a physical LDEV.

CA Link Status: Failed or Active.
NOTE: When Continuous Access is configured as Sync or Async and the selected volume type is SVOL, the CA Link status might be shown as NA - Not Applicable.
Pair Status Alert: Alert configuration for the CA pair status. PA retrieves the CA pair status during each performance cycle. If there is a change in the pair status from the previous performance cycle, an email notification is sent to the email address specified on the Settings screen. These notifications are not sent if there is an intermediate change in the pair status during a performance cycle. You can enable/disable the Pair Status Alert option. To apply the changes, click Actions > Apply Settings.
No. of Paths: The physical transmission link between the local and remote systems is called the data path. PA displays the number of active Continuous Access paths from a PVOL to an SVOL.
NOTE: When Continuous Access is configured as Sync or Async and the selected volume type is SVOL, the number of paths might be shown as NA - Not Applicable.
Fence Level: The fence level of the target device (pair volume). The fence level is the method of setting how an XP or XP7 Continuous Access pair rejects write I/O requests from the host according to the condition of mirroring consistency. Displays one of the following:
ASYNC for asynchronous communications
DATA, STATE, or NEVER for synchronous communications
JNL for Continuous Access journal based transactions
JNL-ID: Journal ID of the journal group associated with the P-VOL or S-VOL. The column value includes the journal ID and the mirror ID on which the CA-J pair is created. Example: jnl ID:mirror ID
CTG-ID: CTG ID, which guarantees consistency in the sequence of asynchronous data transfers for a remote copy volume group.
P-VOL Host Port: Host port assigned for the P-VOL.
S-VOL Host Port: Host port assigned for the S-VOL.

CLPR: The CLPR that manages cache for the Continuous Access transactions, based on the associated volume type (P-VOL or S-VOL).
MP Blade: The MP Blade that processes requests for the Continuous Access transactions, based on the associated volume type (P-VOL or S-VOL).
RG: The RAID Group to which the volume type (P-VOL or S-VOL) LDEV belongs.
RAID Level: The RAID classification for the RAID Group, determined based on the associated volume type (P-VOL or S-VOL).

Table 24: CA/CAJ CTG Performance Data

CTG ID: The CTG to which the P-VOL belongs.
Avg Write IOPS: Average I/Os per second of the LDEV based on the selected CTG ID.
Write IOPS: Total I/Os per second of the LDEV based on the selected CTG ID.
Avg Write MBPS: Average MB of data written per second to the LDEV based on the selected CTG ID.
Write MBPS: Total MB of data written per second to the LDEV based on the selected CTG ID.

Table 25: Volume performance data

Volume: The LDEV configured as the PVOL or SVOL on the array for which the user is viewing the data.
IOPS: The total I/Os on the LDEV per second.
MBPS: The total MB/s of data written to the LDEV, based on the selected volume type (S-VOL or P-VOL).
Backend Tracks: Displays the total backend tracks associated with the selected volume type (S-VOL or P-VOL).
Avg Read RT: The average read response time of the LDEV, based on the selected volume type (S-VOL or P-VOL).

Avg Write RT: The average write response time of the LDEV, based on the selected volume type (S-VOL or P-VOL).
Avg Host Port IO: The average I/Os on the host port assigned for the selected volume type (S-VOL or P-VOL).
Avg Host Port MB: The average MB/s on the host port assigned for the selected volume type (S-VOL or P-VOL).
CLPR Usage %: The total usage percentage of the CLPR that is configured for the selected volume type (S-VOL or P-VOL).
Write Pending %: The percentage of data pending to be written to an LDEV from the CLPR that is configured for the selected volume type (S-VOL or P-VOL).
Side File %: The utilization of the side file, shown as a percentage, for the CLPR that is configured for the selected volume type (S-VOL or P-VOL).
MP Blade Util %: The average utilization of the MP Blade that is configured for the selected volume type (S-VOL or P-VOL).
NOTE: The MP Blade average utilization data is collected during the DKC performance data collection. The collection frequency set for the DKC data collection might be different from that set for the LDEV data collection.
RG Util %: The total utilization of each RAID Group that is configured for the volume, based on the volume type (S-VOL or P-VOL).

Table 26: Port performance data

Port: Port assigned for the Continuous Access activity.
Attribute: Provides the CA initiator and RCU target ports.
Avg IO/sec: Average I/O rate per second.
Avg MB/sec: Average throughput per second.
Agg GB/hour: Displays the aggregate throughput value for the last collected hour.
Agg GB/day: Displays the aggregate throughput value for the last collected day.
Agg GB/week: Displays the aggregate throughput value for the last collected week.

Table 27: CA Journal

Mirror Unit Number: Identifies a pair relationship between journals. When a pair is created, it is assigned a mirror unit number.
Consistency Group ID: The CTG to which the P-VOL belongs.
Journal Group Status: The state of the journal group, which can be one of the following:
JSTAT_SMPL: The journal volume does not have a pair, or is being deleted.
JSTAT_NONE: The specified JID does not exist.
JSTAT_P(S)JNN: P(S)VOL Journal Normal Normal
JSTAT_P(S)JSN: P(S)VOL Journal Suspend Normal
JSTAT_PJNF: P(S)VOL Journal Normal Full
JSTAT_P(S)JSF: P(S)VOL Journal Suspend Full
JSTAT_P(S)JSE: P(S)VOL Journal Suspend Error, including link failure.
Usage (%): The % utilization of the journal group.
Qmarker: The latest sequence number for writing to the P-VOL's consistency group at the PAIR state.
Qcnt: The number of remaining Q-Markers within the journal data.
RPO (sec): The difference between the data write times for the primary and secondary volumes, in seconds.
PVOL Write IOPS: Total write I/Os per second of the PVOL based on the selected journal ID.
PVOL Write MBPS: Total MB of data written per second to the PVOL based on the selected journal ID.

Copy rate: The rate (%) at which data is transferred between the storage systems (see the sketch following Table 28):
If the copy rate value is less than 100%, more data is arriving at the PVOL (primary site) than is being transferred to the SVOL (remote site).
If the copy rate value is more than 100%, less data is arriving at the PVOL (primary site) than is being transferred to the SVOL (remote site).
If the copy rate value is equal to 100%, either the amount of data written to the primary site and transferred to the remote site is the same, or there is no incoming data at the primary site from the host but residual data is still being copied over to the secondary site.
Journal Async Transfer rate: The average transfer rate (MB/sec) for journals in the storage system.
Journal RIO Response rate: The remote I/O average response time (msec) on the storage system.

Table 28: CA Journal Volumes

LDEV ID: The LDEV configured as a journal volume. Displays the LDEV number in cu:ldev format.
RG: The RAID Group to which the journal LDEVs belong.
MP Blade: The MP Blade ID processing requests for the journal group.
MP Blade Util %: The average utilization of the MP Blades that are associated with the LDEVs.
Backend Transfer (Tracks): The total number of backend tracks transferred to or from the XP array backend.
LDEV MB/s - Frontend: The total random and sequential frontend read and write MBs on the journal LDEV during the entire collection interval.
LDEV I/Os - Frontend: The total random and sequential frontend read and write I/Os on the journal LDEV during the entire collection interval.
Avg Read Resp (msec): The average read response time of all the journal LDEVs created in a specified RAID Group over the entire data collection interval.
Max Read Resp (msec): The maximum read response time of all the journal LDEVs created in a specified RAID Group over the last 30 seconds of the collection interval.
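The Copy rate interpretation above reduces to a simple comparison. A minimal sketch (the rates and the function name are hypothetical, for illustration only):

```python
def interpret_copy_rate(copy_rate_pct):
    """Classify a CA journal copy rate (%) as described above."""
    if copy_rate_pct < 100:
        return "Incoming data at the primary site exceeds transfer to the remote site"
    if copy_rate_pct > 100:
        return "Transfer to the remote site exceeds incoming data (draining residual data)"
    return "Incoming data and remote transfer are balanced (or residual copy only)"

for rate in (85, 100, 130):   # hypothetical copy rates
    print(rate, "->", interpret_copy_rate(rate))
```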

View LDEV summary

PA displays the following data by default on the Summary View LDEV screen for all the LDEVs that belong to an XP or an XP7 disk array:
Resource Group
LDEV ID
RG
ACP Pair Id
CHIP Port
Host Group
LDEV IO/s - Frontend
Avg Read Resp (msec)
Avg Write Resp (msec)
MP Blade ID
IMPORTANT: Since the CHIP/CHA and ACP/DKA MPs are moved to the MP blades in the P9500/XP7 disk arrays, their MP utilization metrics are not applicable for the P9500/XP7 disk arrays. For more information, see View MP Blade utilization summary for XP7 disk arrays on page 213.
You can query the existing performance data in PA for a particular date and time stamp to view the corresponding point-in-time data for all the LDEVs. By default, the data displayed is for the last performance data collection time stamp, sorted in descending order based on the average read response time of the individual LDEVs. You can query the LDEV data for a different date and time stamp and also sort the data based on a different sort type. For more information, see Query and sort LDEV data on page 229.
By default, the values displayed for the following are based on the performance values of all the LDEVs that are displayed for the last collection time stamp:
Total IOs: The total frontend I/Os handled by the selected XP or XP7 disk array.
Total MBs: The total frontend throughput in MB/s managed by the selected XP or XP7 disk array.
Total Tracks: The total tracks on the selected XP or XP7 disk array.
After you query for the LDEV data, these totals are recalculated based on the performance values of all the LDEVs retrieved for the specified date and time stamp. For example, if 100 LDEV records are displayed by default for the last collection time stamp, the totals are calculated over all 100 LDEVs; if a query returns 50 LDEV records, the Total IOs, Total MBs, and Total Tracks values are recalculated over only those 50 records.
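A minimal sketch of that recalculation, with hypothetical per-LDEV values (the totals are always computed over exactly the records the query returned):

```python
# Hypothetical per-LDEV performance records: (LDEV ID, IO/s, MB/s, tracks).
ldev_records = [
    ("00:01", 420, 3.2, 150),
    ("00:02", 310, 2.4, 90),
    ("00:03", 125, 0.8, 40),
]

total_ios = sum(r[1] for r in ldev_records)
total_mbs = sum(r[2] for r in ldev_records)
total_tracks = sum(r[3] for r in ldev_records)
print(f"Total IOs: {total_ios}, Total MBs: {total_mbs:.1f}, "
      f"Total Tracks: {total_tracks}")
# Total IOs: 855, Total MBs: 6.4, Total Tracks: 280
```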

The Total No. of Records field displays the total number of LDEV records retrieved for the selected last collection date and time stamp. 150 LDEV records are displayed in every section of the LDEV table. Click the page links to navigate to other sections of the LDEV table and view additional LDEV records. You can also use the prev, next, or last links to navigate to the respective pages.

Query and sort LDEV data

You can query the performance data in the PA database for the last data collection date and time stamp for which you want to view the LDEV data. By default, your query is executed on the latest performance data received from the selected XP or XP7 disk array. You can also sort the LDEV data that is displayed in the LDEV table.
NOTE: The data for the following metrics sorts in ascending order:
Host Group
Chip Port ID
ACP Pair ID
LDEV ID
Emulation
Cont. Access
The remaining metrics sort in descending order.
The table below describes the sorting options based on which you can sort the data in the LDEV table.

Table 29: Array View LDEV table - Sort By options

Avg Read Resp (msec): This is the default selection, where the LDEV data is sorted based on the average read response time of all the LDEVs. The LDEVs with the highest response time are displayed first.
ACP Pair ID: Select ACP Pair ID to sort LDEV data based on the ACP pairs.
Backend Transfer: Select Backend Transfer to sort LDEV data based on the number of tracks transferred on the backend.
CHIP Port ID: Select CHIP Port ID to sort LDEV data based on the ports connected to the selected XP or XP7 disk array.
Cont. Access: Select Cont. Access to sort LDEV data based on the continuous access volumes.

Emulation: Select Emulation to sort LDEV data based on the emulation data for all the LDEVs. An array group is divided into open volumes of equal size; these volumes are referred to as emulation types. If PA cannot determine the emulation type, a placeholder value appears. This does not affect performance data collection.
Host Group: Select Host Group to sort LDEV data based on the host groups (does not apply to the XP48 Disk Array).
Jnl Group: Select Jnl Group to sort LDEV data based on the journal volume pool IDs.
NOTE: The Jnl Group sort option is displayed only if journal groups are configured in the selected XP or XP7 disk array.
LDEV MB/s - Frontend: Select LDEV MB/s to sort LDEV data based on the frontend throughput (MB/s) of the LDEVs.
LDEV IO/s - Frontend: Select LDEV I/Os to sort LDEV data based on the frontend I/Os of the LDEVs. Frontend I/Os include the total I/Os for the following during the reporting period:
Random reads
Random writes
Sequential reads
Sequential writes
LDEV ID: Select LDEV ID to sort LDEV data based on their cu:ldev IDs.

The data displayed in the LDEV table is either black or blue text. Black text indicates that no additional information is available; blue text indicates hyperlinks that you can click to view the respective component information in a separate browser window. Most of the hyperlinks display performance graphs of components for the associated metrics in the Chart Work Area; all other hyperlinks display information in different formats.
To query and sort the data:
1. Select the date and time stamp in the Last Collection section.
2. Click Query. If you do not select a last collection date and time stamp, the current last collection date and time stamp is considered for querying the data.

IMPORTANT:
For an XP24000 Disk Array, performance data can be collected on up to 64K binary (65,536) LDEVs.
For XP and XP7 disk arrays with external LDEVs, no valid value is displayed under ACP PAIR in the LDEV table, as the external LDEVs do not have a valid ACP pair associated with them. Hence, all the external LDEVs for an XP or XP7 disk array are grouped together. The P9500, XP24000, XP20000, XP12000, and XP10000 disk arrays support external LDEVs.
For the XP24000, XP20000, XP12000, and XP10000 Disk Arrays, double-click an SLPR or CLPR value to view the respective details. For more information, see SLPR detail view and CLPR detail view. For a P9500/XP7 Disk Array, you can directly view the CLPR details. The SLPR details are displayed only for the XP disk arrays.
For snapshot and ThP pools in the XP disk arrays, the SLPR and CLPR details are displayed only for the associated LDEVs. The SLPR and CLPR details are not displayed for the LDEVs that form the snapshot and ThP pools.
For snapshot and ThP pools in the P9500/XP7 disk arrays, only the CLPR details are displayed for the associated LDEVs. The CLPR details are not displayed for the LDEVs that form the snapshot and ThP pools.
A ThP or snapshot pool must have at least one VVOL assigned to it for the ThP or snapshot pool to be displayed in the LDEV table.
3. Sort the LDEV data based on one of the options displayed in the Sort By list. By default, the LDEV data is sorted based on the average read response and displayed in descending order in the LDEV table. For more information on the attributes, see Array View LDEV table - Sort By options. Before clicking the Query button, you can also select the attribute from the Sort By list; the LDEV data is automatically sorted based on the selected attribute and displayed in the LDEV table. The sorting is uniform across all the records displayed in the LDEV table and is not limited to the current section of records that you are viewing.

Configuring column settings

You can view data for various array components in the LDEV table. The data displayed varies based on whether you selected an XP or an XP7 disk array. By default, PA displays data for the following components based on the array you select. The data displayed is for the last collection cycle or the selected date and time range. The following table lists the components for which you can view data in the LDEV table by default. The Yes and No values under the For XP disk arrays and For P9500/XP7 disk arrays columns indicate whether that particular component is displayed for the XP/P9500/XP7 disk array.

LDEV ID: The identification number for the LDEV. (XP: Yes; P9500/XP7: Yes)
RG: The RAID group to which the LDEV belongs. (XP: Yes; P9500/XP7: Yes)
ACP Pair ID: The card letters for the ACP pair. (XP: Yes; P9500/XP7: Yes)
CHIP Port ID: The port ID for the CHIP (CHP) port. (XP: Yes; P9500/XP7: Yes)
Host Group: The host group to which the host belongs. (XP: Yes; P9500/XP7: Yes)
MP Blade ID: The identification number of the MP Blade that is currently associated with the LDEV. (XP: No; P9500/XP7: Yes)
In addition, the performance values of LDEVs for the following metrics are also displayed for the XP and the XP7 disk arrays:
LDEV IO/s Frontend: The total frontend I/Os for all random reads, random writes, sequential reads, and sequential writes during the reporting period.
LDEV MB/s Frontend: The total frontend throughput in MB/s for the LDEV.
Average Read Response (msec): The average time taken, in milliseconds, for read I/Os from the time the read command is received at the array host port to the time the array has returned a good status frame to the host indicating successful completion of the command after transferring the data. The data transfer time, which relies on the switch buffer availability to receive data frames, is included in the read response time.
Average Write Response (msec): The average time taken, in milliseconds, for write I/Os from the time the write command is received at the array host port to the time the array has returned a good status frame to the host indicating successful completion of the write command after reception of the data from the host. The data transfer time, which relies on the switch buffer availability to send data frames to the array, is included in the write response time.
To configure the column settings:
1. Click the Column Settings check box.

2. Select the check box for each component that must be monitored. Choose the Select All check box if you want to choose all components and add them as columns of information to the LDEV table.
3. Clear the Column Settings check box to close the LDEV Column Settings list. Accordingly, new columns are added and the data for the selected components is displayed under the respective new columns in the LDEV table.

Components and metrics in the LDEV Column Settings list

The table below lists the components available for selection in the LDEV Column Settings list:

Table 30: Components and metrics in LDEV Column Settings list

ACP Pair ID: The card letters for the ACP pair.
ACP Pair Util: The percentage of the ACP pair processors' usage during the reporting period.
NOTE: This metric is available only for the XP disk arrays.
Avg Read Resp (msec): The average read response time, in milliseconds, for the LDEV.
Avg Write Resp (msec): The average write response time, in milliseconds, for the LDEV.
Backend Transfer: The total number of tracks transferred on the backend.
BC Vol 0: The business copy volume 0 mode.
BC Vol 1: The business copy volume 1 mode.
BC Vol 2: The business copy volume 2 mode.
Cache Size (GB): The total cache size in gigabytes.

CHP Port ID: The port ID for the CHIP (CHP) port. Provides the option to view information associated with a particular port or with all ports.
CHP Util: The percentage that the CHP processors were used during the reporting period.
NOTE: This metric is available only for the XP disk arrays.
CLPR: The CLPR ID.
Cont. Access: Continuous Access mode.
Device File: The name of the device file.
NOTE: If an array is connected to a host agent that is running on the HP-UX 11i v3 operating system, the DSF is displayed in a new format. A legacy DSF is displayed in parentheses next to the new format.
Backend Transfer Sequential Reads: The backend transfer sequential reads for the LDEV.
Backend Transfer Non-Sequential Reads: The backend transfer non-sequential reads for the LDEV.
Backend Transfer Writes: The backend transfer writes for the LDEV.
Emulation: An array group is divided into open volumes of equal size. These volumes are referred to as emulation types. If PA cannot determine the emulation type, an error appears. The error does not affect performance data collection.
NOTE: The emulation type is displayed as "Not Known" for emulations that start with "OPEN-XP".
E-LDEV: The external LUN LDEV ID on the external array.
Ext-Lun: Indicates that the LDEV is an Ext-Lun. The following options are available:
- (hyphen) = Normal LUN
E = Ext-Lun
P = Ext-Lun provider (this LDEV is used as an Ext-Lun for another array)
E-Port(s): A list of Ext-Lun initiator ports (ports used to connect to an external array).
E-Seq: The Ext-Lun provider's serial number for the array.
Host ID (Host identifier): The name of the host machine. PA discovers LDEV-to-CHIP port connectivity. Unknown is displayed if the host name is unknown. This automatic CHIP-LDEV mapping works only for open volumes.

Host Group: The port host group.
IO Size: Displays the average IO size in KB for the selected LDEV. The average IO size is derived by dividing the total MB/s by the total I/Os for that LDEV (IO size = LDEV MB Total / LDEV IO Total). (See the sketch following this table.)
Jnl Group: The continuous access journal pool IDs.
LDEV ID: The identification number for the LDEV.
LDEV IO/s: The total frontend I/Os for all random reads, random writes, sequential reads, and sequential writes during the reporting period.
LDEV MB/s: The total frontend throughput in MB/s for the LDEV.
Load Inhibit Count: The count in Cache Load Inhibit Mode.
LUN (Logical Unit Number) ID: The identification number of the LUN.
MP Blade Id: The identification number of the MP blade that is currently processing requests for an LDEV. The MP Blade ID includes the cluster number and the blade location. For example, in MPB-1MA, 1 indicates the cluster number and MA indicates the blade location.
NOTE: This component is displayed only for the P9500/XP7 disk arrays.
Max Read Resp (msec) - valid for last 30 secs: The maximum read response time, in milliseconds, for the LDEV.
Max Write Resp (msec) - valid for last 30 secs: The maximum write response time, in milliseconds, for the particular LDEV.
RG: The RAID Group to which the LDEV belongs.
Random MB Reads - Frontend: The random frontend read throughput in MB/s for the LDEV.
Random MB Writes - Frontend: The random frontend write throughput in MB/s for the LDEV.
Random Reads - Frontend: The random frontend I/O read values.
Random Read Cache Hits - Frontend: The random frontend I/O read cache hit values.
Random Write Cache Hits - Frontend: The random frontend I/O write cache hit values.
Random Writes - Frontend: The random frontend I/O write values.

Sequential Read Cache Hits - Frontend: The sequential frontend I/O read cache hit values.
Sequential Reads - Frontend: The sequential frontend I/O read values.
Sequential Writes - Frontend: The sequential frontend I/O write values.
Sequential MB Reads - Frontend: The sequential frontend read throughput in MB/s for the LDEV.
Sequential MB Writes - Frontend: The sequential frontend write throughput in MB/s for the LDEV.
Sequential Write Cache Hits - Frontend: The sequential frontend I/O write cache hit values per second.
SS ID: The identification number of the subsystem.
SLPR: The SLPR group ID.
NOTE: SLPR does not exist in the P9500/XP7 disk arrays, so the SLPR group ID is displayed only for the XP disk arrays.
Target:LUN: The LUN associated with the given LDEV.
Vol. Group: The volume group identification name, if the device is associated with a volume group. PA reports volume groups from LVM (an HPE brand) and VxVM (a Veritas brand).
Attribute: Indicates that a volume is of ALU (Administrative Logical Unit)/SLU (Subsidiary Logical Unit) type.
Resource Group: Represents the number of resource groups that are part of the selected VSM.
NOTE: The E-LDEV, Ext-Lun, E-Port(s), E-Seq, Jnl Group, and Vol. Group components are available for selection only if they are configured in the selected XP or XP7 disk array.
The following metrics are not applicable for the XP/XP7 continuous access journal pool LDEVs; NA is displayed:
Response Time Metric category: Maximum Write Response and Average Write Response
Frontend IO Metric category: Random Write Cache Hits, Random Writes, Sequential Write Cache Hits, and Sequential Writes
Frontend MB Metric category: Random MB Writes and Sequential MB Writes
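The IO Size derivation above (IO size = LDEV MB Total / LDEV IO Total) can be sketched as follows. The figures are hypothetical, and the MB-to-KB conversion factor is an assumption for illustration:

```python
# Hypothetical totals for one LDEV over a collection interval.
ldev_mb_total = 48.0      # total MB transferred
ldev_io_total = 3000      # total I/Os

# IO size = LDEV MB Total / LDEV IO Total, expressed here in KB per I/O
# (assuming 1 MB = 1024 KB for the conversion).
io_size_kb = (ldev_mb_total * 1024) / ldev_io_total
print(f"Average IO size: {io_size_kb:.1f} KB")   # 16.4 KB
```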

View replication data for LDEVs

PA provides details on whether a particular LDEV is configured as one of the following for P9500/XP7 Business Copy or P9500/XP7 Continuous Access Synchronous (the Continuous Access Synchronous software provides remote replication between disk arrays that belong to the P9500/XP7 and XP families):
Primary volume (PVOL): The volume in a copy pair that contains the original data to be replicated. The data on the P-VOL is duplicated synchronously or asynchronously on the secondary volume (S-VOL).
Secondary volume (SVOL): The volume in a copy pair that is the copy of the original data on the primary volume (P-VOL).
PA also provides the current replication pair status between the PVOLs and SVOLs. Two logical volumes are paired when they are in a replication relationship in which one volume contains the original data to be copied and the other volume contains the copy of the original data. The copy operations can be synchronous or asynchronous, and the pair volumes can be located in the same storage system (in-system replication) or in different storage systems (remote replication).
IMPORTANT: If the state for an LDEV is displayed as SMPL (Simplex), the LDEV is configured as neither a PVOL nor an SVOL.
1. In the LDEV Summary View, click the Column Settings check box.
2. To view the continuous access volumes, select the Cont. Access check box in the LDEV Column Settings list. To view the business copy volumes, select the check box for BC Vol 0, BC Vol 1, or BC Vol 2, or select all three volumes in the LDEV Column Settings list. The columns are automatically updated.
3. Clear the Column Settings check box to close the LDEV Column Settings list.
4. In the LDEV table, click a continuous access or business copy volume to view the associated PVOL or SVOL, and their current replication pair status. The following are the different replication pair statuses:
SMPL (Simplex): A volume that is not assigned to a pair is in simplex status.
COPY: When copy processing is started, the primary system changes the status of the P-VOL and S-VOL to COPY.
PAIRED: When the initial copy processing is complete, the primary system changes the status of both data volumes to PAIRED.
Pair Suspend Split: When a pair is split by the user, the primary or secondary system changes the status of the P-VOL and S-VOL to Pair Suspend Split.

Pair Suspend Error: When a pair is suspended due to an error condition, the primary system changes the P-VOL and S-VOL status.
Reverse Copy: The replication is in a reverse copy mode, from S-VOL to P-VOL.
Pair SideFile 30% over: The continuous access asynchronous side file usage is over 30%.

Export LDEV data

You can export LDEV data to an Excel spreadsheet for the date and time range that you specify. The data for all the LDEVs monitored by PA during the specified start and end date and time is exported to a spreadsheet in CSV format.
1. Select the date range from the Start Date and End Date calendars.
2. Select the time range from the respective hour : minutes : seconds lists.
3. Click Export to Excel.
4. Click OK to continue.
A record for the export activity is logged in the Event Log screen. The record includes the name of the XP or XP7 disk array, and the date and time when the export activity was initiated. After the data is exported, another record is logged in the Event Log screen.
The LDEV data is exported to a CSV file located at: Local_drive:\HPSS\pa\tomcat\webapps\pa\export. All CSV files are available in this location. The local drive on the management station refers to C:, which includes the Windows operating system and the HPSS folder. A separate CSV file is created for every export operation; the existing CSV file is not replaced by the new file. The file name format is as follows: LDEV_<Array>_Timestamp. (Timestamp refers to the date and time when the export operation was initiated.)
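As a small illustration of working with the exported files, the sketch below lists export files matching the documented LDEV_<Array>_Timestamp naming pattern. The path and pattern are taken from the description above; the split logic is illustrative only and assumes underscore-separated name parts.

```python
import glob
import os

# Export location documented above (C: is the management station's local drive).
export_dir = r"C:\HPSS\pa\tomcat\webapps\pa\export"

# Each export creates a new file named LDEV_<Array>_Timestamp.
for path in sorted(glob.glob(os.path.join(export_dir, "LDEV_*"))):
    name = os.path.basename(path)
    # e.g. "LDEV_10090_<timestamp>" -> array serial "10090" (illustrative)
    parts = name.split("_", 2)
    array_serial = parts[1] if len(parts) > 1 else "unknown"
    print(f"{name}: array {array_serial}")
```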

View RAID Group information

Click an RG ID item in the LDEV table to view the RAID Group details and disk mechs details. The Disk Mech displays the 2-way and 4-way parity group concatenation. The 2-way parity group concatenation can be configured as either 2D+2D or 7D+1P. The 4-way parity group concatenation is configured as 7D+1P. The system supports RMLIB version and later.
If an LDEV is associated with two RAID Groups, data about both RAID Groups is displayed. If LDEVs are associated with multiple RAID Groups (such as in 4D+4D configurations where the LDEV is mirrored on two different RAID Groups), these multiple RAID Groups are treated as separate sets of items. For example, if you have an LDEV with RAID Group , you must select in the dropdown menu. An LDEV mapped to is treated separately from a RAID Group mapped only to 1-1 or only to 1-2.
You can also view detailed information for a RAID Group, such as the data transfer and backend transfer information, and the average and maximum read and write response for each LDEV. Click the corresponding RG ID in the CLPR window. The line above the RAID Group table indicates the hierarchical information of the selected RAID Group. For more information on viewing CLPR information, see CLPR detail view.

Continuous Access Journal Detail View

The Continuous Access Journal is an asynchronous mirroring program similar to Continuous Access Asynchronous, except that the transactions to be written to the secondary disk array are maintained in a disk-based journal file. This provides better performance for secondary disk array systems that are not highly available or that may be subject to bandwidth contention from other applications.
Double-click a journal group volume ID in the Jnl Group column to open the Continuous Access Journal Detail View screen. A list of the LDEVs configured in the continuous access journal volume is displayed; a maximum of 16 LDEVs are displayed. The status on backend transfers and the average read response of each LDEV associated with the journal group are also shown. Additionally, for a P9500/XP7 disk array, the MP blade processing the I/O requests for the journal LDEVs is also shown.
JID: Journal Group ID
MUN: Mirror Unit Number, as in BC
CT ID: Consistency Group ID
Status: Journal group status
Usage: Percentage full of the journal group
Qmarker: Current data address being transferred; indicates the latest sequence number for writing to the P-VOL's consistency group at the PAIR state.
Qcnt: Pending writes; shows the remaining total Q-Markers within the journal data.
If there are no associated continuous access journal groups configured, no value is displayed in the Jrnl Grp column.

View ThP Pool Occupancy information

The ThP volumes that belong to a ThP pool are displayed as THP-PID(<pool id>), and the snapshot volumes that belong to a snapshot pool are represented as Snap-PID(<pool id>), under RAIDGroup in the LDEV table. Click a THP-PID(<pool id>) to view the ThP pool occupancy information, which includes the mapping between the pool volumes (real LDEVs) and the V-VOLs.
THP Pool ID: The ID of the pool (pool number).
THP Pool Status: The status of the ThP pool:
0: Undefined/Creating/Deleting. The specified pool does not exist completely.
1: Normal.
2: Pool capacity is beyond the threshold.
3: Pool capacity has reached 100% of the pool.
4: Failure; no further information can be shown for the pool.
POOL Threshold 1: A user-configurable pool threshold (varying between 5% and 95% in increments of 5%). The default value is 70%. This is the high threshold for the pool.
POOL Threshold 2: The threshold level that indicates the WARNING level for the pool. This value is always 80% and cannot be changed.
THP Pool capacity: The total capacity of all the ThP volumes in the pool.
THP Pool unused capacity: The available capacity for the ThP pool.
VVOL ID: The CU:LDEV identifier for the virtual volume (V-VOL).
VVOL Threshold (%): The threshold set for the V-VOL in the ThP pool.
VVOL Capacity (MB): The capacity of the virtual volume (in MB).
REAL LDEV: The physical LDEV that is part of the pool.
RG(s): The RAID Group of the real LDEV.
ACP Pair: The ACP pair that manages the real LDEV.
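A minimal sketch of how the documented status codes and thresholds could be evaluated (the mapping mirrors the list above; the function and the pool values are hypothetical):

```python
THP_POOL_STATUS = {
    0: "Undefined/Creating/Deleting",
    1: "Normal",
    2: "Pool capacity beyond threshold",
    3: "Pool capacity reached 100%",
    4: "Failure",
}

def pool_usage_alerts(used_gb, total_gb, threshold1_pct=70, threshold2_pct=80):
    """threshold1 is user-configurable (5-95% in steps of 5, default 70);
    threshold2 is the fixed 80% WARNING level."""
    usage_pct = 100.0 * used_gb / total_gb
    alerts = []
    if usage_pct >= threshold2_pct:
        alerts.append("WARNING level (Threshold 2) reached")
    elif usage_pct >= threshold1_pct:
        alerts.append("High threshold (Threshold 1) reached")
    return usage_pct, alerts

usage, alerts = pool_usage_alerts(used_gb=780, total_gb=1000)
print(f"{usage:.0f}% used, status: {THP_POOL_STATUS[2]}, alerts: {alerts}")
```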

View port information

Click a CHP Port ID value in the LDEV table to view the Port Information window. The DKC serial number is displayed. In addition, the port type, such as Fibre or FCoE (FCoE is applicable only for P9500/XP7 disk arrays), is also displayed for the respective CHIP port ID. Click IO/s Charts or MB/s Charts to view the respective charts.
IMPORTANT: The LDEV table does not display hyperlinks in the ACP Pair ID and ACP Pair Util fields for RAID Groups spanning multiple ACP pairs. Hence, no chart can be created for them.
An XP24000 type array has 32 CHIPs, 8 ACP pairs, and 4 MPs per port; an XP20000 type array has 8 CHIPs, 4 ACPs, and 4 MPs per port.
All virtual volumes that are not associated with any pool, do not have RAID Group information, and are of unknown type are considered a single non-existing (virtual) volume group. This single non-existing (virtual) volume group is denoted as VVol-Grp in the RG column in the LDEV table, and performance data is collected for all these volumes.

View SLPR information

Storage Logical Partition (SLPR) is a partition of the RAID500 to which the host ports (one or more) and the CLPRs (one or more) are assigned. SLPR0 always exists and cannot be deleted. Sometimes, the SLPR acronym is expanded with an additional word, for example, Storage administrator Logical Partition or Storage management Logical Partition; both mean the same. The purpose of the SLPR is to allow multiple administrators to manage a subsystem without the risk of mistakes that can destroy another user's volumes, or of reducing other users' expected performance by using more components (for example, cache) than required.

IMPORTANT: The SLPR component is applicable only for the XP disk arrays. It does not exist in the P9500/XP7 disk arrays. As a result, the SLPR-related data is not displayed in HPE XP7 Performance Advisor for the P9500/XP7 disk arrays.
Click an SLPR value in the LDEV table to view the detail view for that SLPR in a separate browser window. In the SLPR window, the line above the table indicates the hierarchical information of the selected SLPR. The SLPR view includes two tabs, the CLPR tab and the Ports tab.
The CLPR tab displays the following details for each CLPR in the selected SLPR:
Cache size
Write pending data
Sidefile usage data
Read hits data
Click a CLPR to view the associated details. For more information, see CLPR detail view.
The Ports tab displays the following details for each port associated with the selected SLPR:
Maximum IO/s
Average IO/s
Minimum IO/s
Maximum MB/s
NOTE: By default, the value displayed is 0 for arrays that do not support SLPR or CLPR.

View CLPR information

Click a CLPR value in the LDEV table to view the detail view for that CLPR in a separate browser window. In the CLPR window, the line above the table indicates the hierarchical information for the selected CLPR. The CLPR details include the following:
Cache size
Write pending data
Sidefile usage data
Read hits summary data
Additionally, the CLPR table displays the following details:
RAID group
LDEV IO/s
LDEV MB/s
Backend transfer data
Overall RAID group utilization percentage
Click a column heading to order the table by that column.

Click a RAID group to view the corresponding RAID group details. For more information, see View RAID Group information on page 238.

View CHA summary

A channel adapter (CHA) is a device that provides the interface between the array and the external host system. Occasionally, this term is used synonymously with the term channel host interface processor (CHIP). To view the summary of the installed CHIPs/CHA ports, navigate to Summary View > CHA Summary. Click an individual CHIP/CHA group box to go to the CHA Info screen for a detailed view of the performance summary. For more information, see View CHA Info.
The summary is displayed in the CHIP pane in CHA Summary. The CHIP/CHA summary table includes the performance and utilization metrics of all the installed CHIPs/CHAs. The CHA summary table includes only the performance metrics of all the installed CHAs.
The following images display the CHIP Port Activity Avg group box and the CHIP/CHA summary table for 10090, which belongs to the XP24000 Disk Array type.
The following images display the CHIP Port Activity Avg group box and the CHA summary table for 53036, which belongs to the P9500 Disk Array type.
The following table describes the CHIP/CHA summary table for an XP disk array and the CHA summary table for a P9500/XP7 disk array.

Name: For XP disk arrays, the CHIP/CHA summary table includes the CHIP or the CHA name (example: CHA-1EU). For P9500/XP7 disk arrays, the CHA summary table includes the CHA name (example: CHA-1F, where 1 indicates the cluster number where the CHA board is located).
Individual MPs and their utilization percentage: Included for XP disk arrays; not applicable for P9500/XP7 disk arrays. In the above image, XX in CHPXX-1EU indicates the corresponding processor ID, which can be 00, 01, 02, or 03. The CHPXX-1EU utilization % indicates the processor utilization percentage, which is as follows:
CHP00-1EU is 0%.
CHP01-1EU is 0%.
CHP02-1EU is 0%.
CHP03-1EU is 0%.

Processor to port mapping data, where the associated port IDs appear in the respective processor utilization blocks: Included for XP disk arrays; not applicable for P9500/XP7 disk arrays. In the above image, the processor-port mapping for the CHIP CHA-1EU is as follows:
CHP00-1EU - associated ports are CL1A and CL5A.
CHP01-1EU - associated ports are CL3A and CL7A.
CHP02-1EU - associated ports are CL1B and CL5B.
CHP03-1EU - associated ports are CL3B and CL7B.
The average port activity (I/Os and MB/s) of all the associated ports in the Port Activity Avg block: Included for both table types. In the XP image above, the average port I/Os is 5313 and the average port MB/s is . In the P9500/XP7 image above, the average port I/Os is and the average port MB/s is .
NOTE: Since the CHIP/CHA and the ACP/DKA MPs are moved to the MP blades in the P9500 and XP7 disk arrays, their MP utilization metrics are not applicable for the P9500 and XP7 disk arrays.

View DKA summary

A Disk Adapter (DKA) is the hardware component that controls the transfer of data between the drives and cache. A DKA feature consists of a pair of boards. In an XP disk array, the DKA is one of the two PCB types that contain the MPs. The DKA summary table includes the performance and utilization metrics of all the ACP/DKAs configured in the array.
To view the summary of all the installed DKA ports, navigate to Summary View > DKA Summary. To view complete metrics, click an individual ACP group box; clicking the box takes you to the DKA Info screen for the selected record. For more information, see View DKA Info.
The following images display the DKA summary table for 10090, which belongs to the XP24000 Disk Array type.

View CHA Info

In the CHA Info page, select the required CHA from the CHA menu to view the summary table of an individual CHA. The following table describes the data for an individual CHIP/CHA. Yes and No indicate whether that particular information or metric is displayed for the XP and P9500/XP7 disk arrays.

Summary

The number of associated ports. XP disk arrays: Yes. P9500/XP7 disk arrays: No.
The protocol used. XP disk arrays: Yes; the port type can be Fibre. P9500/XP7 disk arrays: Yes; the port type can be Fibre or FCoE.

The utilization percentage of each Processor on the selected CHIP/CHA. XP disk arrays: Yes. P9500/XP7 disk arrays: No. Click an individual Processor utilization block to view the corresponding utilization graph in the Chart View. By default, the utilization data displayed is for the last one hour. For more information on charts and using chart options, see Plot charts.
The average I/Os and throughput of data in MB/s on all the ports in the selected CHIP/CHA. XP disk arrays: Yes. P9500/XP7 disk arrays: Yes.

Port Details

The individual MPs on the selected CHIP/CHA. XP disk arrays: Yes. P9500/XP7 disk arrays: No.
The IDs of the associated ports on the selected CHIP/CHA. XP disk arrays: Yes. P9500/XP7 disk arrays: Yes (the port IDs are directly displayed under the selected CHA).
The maximum and minimum I/Os on individual ports. XP disk arrays: Yes. P9500/XP7 disk arrays: Yes.
The maximum, minimum, and average throughput of data in MB/s on individual ports. XP disk arrays: Yes. P9500/XP7 disk arrays: Yes.

Example

The following image displays the CHA-1EU performance data for 53036, which belongs to the P9500 Disk Array type. The individual performance data for CHA-1EU includes the following:

Summary:

Fibre protocol is used.
Eight ports are associated with CHA-1EU.
Port Activity Avg shows the average I/Os as 7.2, which is an average of the overall average I/Os on all the eight ports. It also displays the average MB/s as 0.25, which is an average of the overall average MB/s on all the eight ports.

Port Details:

CL1A, CL5A, CL3A, CL7A, CL1B, CL5B, CL3B, and CL7B are the ports for CHA-1EU.
Maximum and minimum I/Os.
Maximum, minimum, and average throughput of data in MB/s. For example, the Max I/Os and Min I/Os for CL1A are both currently 0.

View DKA Info

Click an individual DKA in either the DKA summary table or the DKA Info table to view the data. The following table describes the data for an individual ACP/DKA. Yes and No indicate whether that particular data or metric is displayed for the XP and P9500/XP7 disk arrays.

Summary

The Processors on the individual DKA and their utilization percentage. XP disk arrays: Yes. P9500/XP7 disk arrays: No. For example, if you selected the AUMU DKA pair, you can view the Processors and their utilization percentage on AU; similarly, you can view the same details for MU.
The backend transfers for the selected DKA, which include the sequential and nonsequential reads and writes. XP disk arrays: Yes. P9500/XP7 disk arrays: Yes. In addition, the combined backend transfer value is also displayed as an aggregate value for the DKA.

RAID Group Information

The RAID group ID and the RAID level, the associated disk drives, and related metrics, such as the sequential and nonsequential reads and writes. XP disk arrays: Yes. P9500/XP7 disk arrays: Yes.

Example: The following image displays the AUMU backend transfers and RAID Group information for 53036, which belongs to the P9500 Disk Array type.

Multi array virtualization

With Multi Array Virtualization (MAV), devices configured in multiple DKCs can be shown to a server as the same device with alternate paths. This function enables data managed by multiple DKCs to be relocated freely across the physical boundaries of the DKCs without affecting the server (the online state is maintained). Also, by assigning actual volumes that exist in multiple physical DKCs as virtual volumes, the function enables the server and the management server to perform management and operations without recognizing the physical boundaries of the DKCs.

Virtual Storage Machine

About Virtual Storage Machine

A Virtual Storage Machine (VSM) is a frame that manages the virtual IDs configured in a physical DKC. Even in an actual resource environment that does not use MAV, VSM#0 is defined. When a VSM is configured in the storage array, PA collects all the resource data configured in the storage array, including the resources in the VSM, through the command device configured in VSM#0. All the discovered resources in the array are displayed under the storage array serial number.

VSM/Resource Group screen details

Resource Group

DKC: Displays the storage array serial number.
Resource Group: Displays the storage array resource groups that are part of the VSM.
No of LDEVs: Displays the number of LDEVs belonging to the given Resource Group.
No of RGs: Displays the number of RGs belonging to the given Resource Group.

No of Ports: Displays the number of Ports belonging to the given Resource Group.
No of Host Groups: Displays the total number of Host Groups from the storage arrays that are part of the VSM.
IOPs: Displays the aggregate of all IOPS reported on the LDEVs that are part of the given Resource Group.
MBPs: Displays the aggregate of all MBPS reported on the LDEVs that are part of the given Resource Group.
Backend transfer: Displays the total number of Backend tracks transferred to or from the array Backend.
Average Response Time(msec): Displays the average response time across LDEVs that are part of the given Resource Group.
Max Response Time(msec): Displays the maximum response time across LDEVs that are part of the given Resource Group.
Collection Time: Displays the last successful RG performance collection time for the storage array.

RG Summary

DKC: Displays the storage array serial number.
Array Model: Displays the storage array model.
Resource Group: Displays the storage array resource groups that are part of the VSM.
RG: Displays the storage array RAID Groups that are part of the VSM.
CLPR: Displays the cache partition ID configured for the RAID Group.
CLPR Name: Displays the cache partition name.
Collection time: Displays the last successful RAID Group performance collection time for the storage array.
LDEV IO/s: Displays the total frontend I/Os for all random reads, random writes, sequential reads, and sequential writes during the reporting period for all LDEVs that are part of the given RAID Group.
LDEV MB/s: Displays the total frontend throughput in MB/s for the LDEVs that are part of the given RAID Group.

Backend transfer: Displays the total number of Backend tracks transferred to or from the array Backend.
Combined Backend Transfer: Displays the combined Backend transfer value. * indicates one of the following: If any of the physical LDEVs from a RAID Group is configured in multiple ThP pools, the sum of the Backend transfer on all the ThP pools is shown as the combined Backend transfer for that RAID Group (the Backend transfer of each ThP pool is the sum of the Backend transfer on the V-Vols belonging to that ThP pool). If physical LDEVs from multiple RAID Groups are configured in a ThP pool, the combined Backend transfer is reported as an aggregate value for all the RAID Groups.
% RAID Group Utilization Random Read: Displays the random read utilization percentage for the RAID Group.
% RAID Group Utilization Random Write: Displays the random write utilization percentage for the RAID Group.
% RAID Group Utilization Random Write Parity: Displays the random write parity utilization percentage for a RAID Group.
% RAID Group Utilization Sequential Read: Displays the sequential read utilization percentage for a RAID Group.
% RAID Group Utilization Sequential Write: Displays the sequential write utilization percentage for a RAID Group.
% RAID Group Utilization Sequential Write Parity: Displays the sequential write parity utilization percentage for a RAID Group.
Overall % RG Utilization: The overall percentage utilization of a RAID Group, which is the sum of the random reads, random writes, random write parity, sequential reads, sequential writes, and the sequential write parity.

Port summary

DKC: Displays the storage array serial number.
Array Model: Displays the storage array model.
Resource Group: Displays the storage array resource groups that are part of the VSM.

CHP Port ID: Displays the port ID for the CHP port.
Port Type: Displays the port type, such as FCoE (applicable for P9500/XP7 disk arrays), for the port ID.
E-seq(s): Displays the Ext-Lun provider's serial number for the array.
Collection Time: Displays the last successful port performance collection time for the storage array.
Max IO/s: Displays the maximum frontend I/Os on the port.
Avg IO/s: Displays the average of the total frontend I/Os.
Min IO/s: Displays the minimum frontend I/Os on the port.
Max MB/s: Displays the maximum frontend throughput in MB/s.
Min MB/s: Displays the minimum frontend throughput in MB/s.
Avg MB/s: Displays the average frontend throughput in MB/s.

LDEV

DKC: Displays the storage array serial number.
Model: Displays the storage array model.
DKC Latest Collection time: Displays the latest performance data collection time of all the LDEVs in the disk array.
RG Latest Collection time: Displays the latest performance data collection time of all the RGs in the disk array.
Port Latest Collection time: Displays the latest performance data collection time of all the ports in the disk array.

Volume Performance data

Volume: Displays whether the LDEV is configured as the PVOL or SVOL on the array for which the user is viewing the data.
IOPS: Displays the total I/Os on the LDEV per second.
MBPS: Displays the total MB/s of data written to the LDEV, based on the selected volume type (S-VOL or P-VOL), per second.

Backend: Displays the total Backend tracks associated with the selected volume type (S-VOL or P-VOL).
Avg Read RT: Displays the average read response time of the LDEV based on the selected volume type (S-VOL or P-VOL).
Avg Write RT: Displays the average write response time of the LDEV based on the selected volume type (S-VOL or P-VOL).
Avg Host Port IO: Displays the average host port assigned per I/O based on the selected volume type (S-VOL or P-VOL).
Avg Host Port MB: Displays the average host port assigned per MB based on the selected volume type (S-VOL or P-VOL).
CLPR Usage %: Displays the total percentage of CLPR data usage that is configured for the selected volume type (S-VOL or P-VOL).
Write Pending %: Displays the percentage of data pending to be written on an LDEV from the CLPR that is configured for the selected volume type (S-VOL or P-VOL).
Side File %: Displays the utilization of the side file, shown as a percentage, for a CLPR that is configured for the selected volume based on the volume type (S-VOL or P-VOL).
MP Blade Util %: Displays the average utilization of the MP Blade that is configured for the selected volume based on the volume type (S-VOL or P-VOL). The MP Blade average utilization data is collected during the DKC performance data collection. The collection frequency set for the DKC data collection might be different from that set for the LDEV data collection.
RG Util %: Displays the total utilization of each RAID Group that is configured for the volume based on the volume type (S-VOL or P-VOL).

Port Performance Data

CA-Port: Displays the port assigned for the continuous access activity.
Attribute: Displays the type of the port, such as Fibre (Cont Acc Initiator) or Fibre (Ext-Lun Initiator).
Avg IO: Displays the average I/O rate.
Avg MB: Displays the average throughput.

Performance Summary

DKC: Displays the storage array serial number.
DKC Time: Displays the average DKC time.
RG Time: Displays the average RG time.
Port time: Displays the average port time.
IOPS: Displays the aggregate of all IOPs reported on the LDEVs that are part of the given Resource Group.
MBPS: Displays the aggregate of all MBPs reported on the LDEVs that are part of the given Resource Group.
Backend Transfer: Displays the total number of Backend tracks transferred to or from the array Backend.
Avg Response Time: Displays the average response time across LDEVs that are part of the given Resource Group.
Max Response Time: Displays the maximum response time across LDEVs that are part of the given Resource Group.

Configuration Summary

VSM Serial Number: Displays the serial number of the disk array.
Model: Displays the model number of the disk array.
Storage Arrays: Displays the number of storage arrays that are part of the selected VSM.
Resource Groups: Displays the number of resource groups that are part of the selected VSM.
Virtualized Volume: Displays the total number of volumes that are virtualized in the VSM.
LDEVs: Displays the total number of LDEVs from the storage arrays that are part of the VSM.
RAID Groups: Displays the total number of RAID Groups from the storage arrays that are part of the VSM.
Ports: Displays the total number of Ports from the storage arrays that are part of the VSM.
Host groups: Displays the total number of Host Groups from the storage arrays that are part of the VSM.

View component report summary

Procedure

1. On the VSM/Resource Group screen, select the array from the Arrays filter.
2. From the Component filter, select any of the following components:

Resource Group
RG Summary
Port Summary
Ldev Pair Information
Performance Summary
Configuration Summary

The configuration and performance details for the selected component are displayed.

Business Copy

About Business Copy

Business Copy is a local mirroring technology used to create and maintain a full copy of any volume in the storage system. The business copy volumes comprise individual physical LDEVs. Using this option, you can create one or more copies of a data volume within the same storage system. BC copies can be used as backups, with secondary host applications, for data mining, and for testing, while business operations continue without stopping host application I/O to the production volume.

A pair is created when you select a volume to duplicate. This volume becomes the primary volume (P-VOL). You then identify another volume to contain the copy; this becomes the secondary volume (S-VOL). Associate the P-VOL and S-VOL and perform the initial copy.

Business copy screen details

To view the business copy volumes, select Business Copy from the main menu. Select the array from the Array filter; the master pane displays the list of PVOLs for that array. The SVOL and pair status are displayed in BC Pair Status for the selected LDEV ID. PA provides details on whether a particular LDEV is configured as one of the following for Business Copy:

PVOL (primary volume)
SVOL (secondary volume)

Business Copy Volumes

LDEV ID: Displays the identification number for the LDEV. This is the PVOL for which data is duplicated.
BC Vol.0: Displays the business copy volume of the first level.
BC Vol.1: Displays the business copy volume of the first level.
BC Vol.3: Displays the business copy volume of the first level.

BC Pair status

Pvol: Displays the list of LDEV IDs that are primary volumes in the array.
Svol: Displays the list of LDEV IDs that are secondary volumes in the array.
Pair Status: Displays the pair status between the PVOLs and SVOLs. The different replication pair statuses are:

SMPlex: Volume is not configured for replication activity.
Copy: Volume is in the Copy mode, where data from the P-Vol is being copied to the S-Vol.
Paired: Volumes are configured for replication activity.
Pair suspended: The replication pair volumes are in suspended mode.
Pair suspended error: The replication pair volumes are suspended, as an error is noticed with the pair.
Reverse copy: The replication is in a reverse copy mode, from S-VOL to P-VOL.

High Availability

About High Availability

High Availability (HA) enables you to create and maintain a synchronous, remote copy of data volumes on the HPE XP7 Storage (HPE XP7) system. A virtual storage machine is configured in the primary and secondary storage systems using the actual information of the primary system, and the High Availability primary and secondary volumes are assigned the same virtual LDEV number in the virtual storage machine. Because of the same virtual number, the pair volumes are seen by the host as a single volume on a single storage system, and both volumes receive the same data from the host.

A quorum disk located in a third, external storage system is used to monitor the HA pair volumes. Both storage systems access the quorum disk to check on each other. A communication failure between the systems results in a series of checks with the quorum disk to identify the problem and to determine which system can continue to receive host updates.

When HA is configured on the storage arrays, all the HA pairs are listed as normal volumes with the actual LDEV name and storage array serial number. The performance I/Os on the HA pair LDEVs are represented based on the Host IO mode.

High Availability screen details

NOTE: Only the pair status is updated with every port performance collection cycle for the arrays. The remaining values are updated with the configuration collection cycle.

High Availability Pair

VDKC: Displays the virtual storage array serial number.
VLDEV: Displays the virtual LDEV that would be presented to the host.
PVOL Pair Status: Displays the pair status of the P-VOL, and can be one of the following:

Pair: The pair is synchronized.
Copy: The initial copy is in progress; data is being copied from the P-VOL to the S-VOL.
PSUS: The pair is suspended by the user. This status appears on the P-VOL.
PSUE: The pair was suspended due to a failure.

PVOL Path group ID: Displays the path group ID of the PVOL.
Primary DKC: Displays the primary storage array serial number.
PVOL Host IO Mode: Displays the Host IO modes, which represent the I/O actions on the PVOL of an HA pair:

Mirror (Read Local)
Mirror (Read Remote)
Local
Block
Remote

PVOL: Displays all the P-VOLs in the array; the corresponding S-VOL is listed in the same row under S-VOL.
PVOL Last Modified Time: Displays the time stamp when the pair status of the PVOL was last modified.

SVOL pair status: Displays the status of the P-VOL or S-VOL, and can be one of the following:

Pair: The pair is synchronized.
Copy: The initial copy is in progress; data is being copied from the P-VOL to the S-VOL.
SSUS: The pair is suspended by the user, and update of the S-VOL is interrupted.
SSWS: The pair was suspended either by the user or due to a failure, and update of the P-VOL is interrupted. This status appears on the S-VOL.
PSUE: The pair was suspended due to a failure.

SVOL Path group ID: Displays the path group ID of the SVOL.
Secondary DKC: Displays the secondary storage array serial number.
SVOL: Displays all the S-VOLs in the array; the corresponding P-VOL is listed in the same row under P-VOL.
SVOL Host IO Mode: Displays the Host IO modes, which represent the I/O actions on the SVOL of an HA pair:

Mirror (Read Local)
Mirror (Read Remote)
Local
Block
Remote

Quorum Disk ID: Displays the quorum disk ID.
Mirror ID: Displays the mirror ID.
SVOL Last Modified Time: Displays the time stamp when the SVOL was last modified.

Primary Side

Primary DKC: Displays the primary storage array serial number.
LDEV: Displays the actual LDEV ID.
LDEV Metrics: Displays the value of the Total IOPS, Total MBPS, and Average Response Time metrics for the configured LDEV.

Port: Displays the ports that are configured for a primary volume.
Port Metrics: Displays the value of the Total IOPS, Total MBPS, and Average Response Time metrics for the configured port.
Host Group: Displays the Host Groups that are configured for a primary volume.
Host Group Metrics: Displays the value of the Total IOPS, Total MBPS, and Average Response Time metrics for the configured Host Groups.

Secondary Side

Secondary DKC: Displays the secondary array serial number.
LDEV: Displays the actual LDEV ID.
LDEV Metrics: Displays the value of the primary metrics of the configured LDEV.
Port: Displays the ports that are configured for a secondary array.
Port Metrics: Displays the value of the total Port IOPS, total Port MBPS, and Average Response Time metrics for the configured ports.
Host Group: Displays the Host Groups that are configured for a secondary array.
Host Group Metrics: Displays the value of the IOPS, MBPS, and Average Response Time metrics for the configured Host Groups.

Ports

Primary DKC: Displays the primary storage array serial number.
Primary Port: Displays the ports that are configured as initiator for the primary array.
Primary Ports Metrics: Displays the value of the IOPS, MBPS, Aggregate Avg GB per Hour, Aggregate Avg GB per Day, and Aggregate Avg GB per Week metrics for the configured initiator port.
Secondary DKC: Displays the secondary storage array serial number.
Secondary Port: Displays the ports that are configured as target for the secondary array.
Secondary Port Metrics: Displays the value of the IOPS, MBPS, Aggregate Avg GB per Hour, Aggregate Avg GB per Day, and Aggregate Avg GB per Week metrics for the configured target port.

View High Availability information

Prerequisites

Ensure that the primary array and the secondary array are managed using the same management station.
Ensure that the configuration and the performance collections are performed for both the arrays.

NOTE: After adding or deleting an HA pair, perform a configuration collection on the primary and secondary arrays.

Procedure

1. From the HPE XP7 Performance Advisor menu, click High Availability. The table displays the primary side PVOL pair details and the secondary side SVOL details in a single row. You can also filter the information on the PVOL and SVOL Pair status.
2. To view the details of a pair, click the row. The performance information appears in a separate table under the Primary Side and the Secondary Side. The information about the connectivity details of the primary array to the secondary array appears in a separate table.
3. To plot performance charts for a component, click a metric of the component. The chart is displayed in the Chart Work area for the default duration. Use the Preset or Custom option to change the duration.
4. To add alerts, save the chart, or send the chart in an email, right-click the chart.

Manage PA database

About PA database

PA uses PostgreSQL as its database. When PA is installed, all the database-related files are also automatically installed on the management station. The PA v6.4.1 PostgreSQL database co-exists with the Oracle database of prior supported Performance Advisor releases.

You can manage the PA database in the following ways:

Configure the database size manually, where you allocate the available disk space on your management station to the PA database. For more information, see Manually configure database size.
Purge data that belongs to an array, or data older than the specified date.
Purge data automatically, where PA purges the oldest data in its database to accommodate new data.
Archive data from the PA database.
Import data into the PA database.
Generate or schedule Export DB reports, and also view records for the Export DB files. For more information, see About Export Database.

WARNING: The PA database is a stand-alone database designed to run only with PA and not with any other application. Therefore, only supported and documented PA utilities, tools, and integrated features such as configuring, purging, archiving, and backing up and restoring data must be used to maintain it. Standard PostgreSQL database management tools and facilities must not be used to manage or monitor the database.

IMPORTANT: You must log on to PA as an Administrator or a user with administrator privileges to configure, purge, archive, or import the PA database. You must also have this privilege to view or delete Export DB schedules.

After PA is installed, HPE recommends that you set the maximum database size based on your available disk space.

Calculating database growth per day in PA

1. To calculate the number of days that have passed from the day of first collection, use the following formula:

days = (CUR_TIME - MIN_START_TIME) / MILLISECONDS_TO_DAY

where:

MIN_START_TIME: The time (in milliseconds) of the first collected data point in the Performance Advisor environment.
CUR_TIME: The current time (in milliseconds) of the system on which Performance Advisor is installed.
MILLISECONDS_TO_DAY: 1000*60*60*24

2. To calculate the contribution of each array to the current DB size from the day of first collection, use the following formula:

GBTillDatePerarray = (DATA_POINTS / TOTAL_DATA_POINTS) * Current Database size

where:
DATA_POINTS: The number of data points per day per array.
TOTAL_DATA_POINTS: The sum of data points per day for all the arrays.

3. To calculate the database growth per day in MB per array, use the following formula:

MBperDayperarray = (GBTillDatePerarray / days) * GIGABYTE_TO_MEGABYTE_CONVERSION

where:
GIGABYTE_TO_MEGABYTE_CONVERSION: 1024

A worked example of this calculation appears after the purge overview below.

About Purge

You can manually purge configuration and performance data for an XP or a P9500/XP7 disk array, or data older than the current specified date. PA can also purge data automatically. Initially, when PA is installed and performance data collection is not yet scheduled, the following message is displayed:

Important: Either collections are not being performed or the database size has not yet reached 5GB. Performance collection must be initiated and the database size must be at least 5GB for Auto Purge Forecasting.

After the performance data collection is initiated and the database size increases to 5 GB, PA displays a forecast message on the Purge pane. The forecast message indicates the approximate duration by when auto purge starts if the current data collection trend continues.

When you purge data for an XP or an XP7 disk array, the corresponding configuration and performance data are permanently removed from the database. When you purge data for a specified duration, only the performance data collected for the XP and the XP7 disk arrays during that duration is permanently removed from the database. You can continue to collect performance data for those arrays, as their configuration data still exists in the database. Purging data eventually increases the performance of PA, as a considerable amount of disk space used by the database is released back.
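The following is a worked instance of the database-growth calculation above. All numbers are hypothetical and chosen only to illustrate the arithmetic. Suppose the first data point was collected 30 days ago, one array contributes 40,000 of the 100,000 total data points per day, and the current database size is 10 GB:

days = (CUR_TIME - MIN_START_TIME) / MILLISECONDS_TO_DAY = 30
GBTillDatePerarray = (40,000 / 100,000) * 10 GB = 4 GB
MBperDayperarray = (4 / 30) * 1024 = approximately 136.5 MB per day

At this rate, the array adds roughly 4 GB to the database every 30 days, which you can compare against the configured maximum database size when planning purge or archive activities.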

CAUTION: The data that is purged cannot be recovered; it is permanently deleted from the PA database. Hence, purge data only when you are absolutely sure that the data is no longer required. Also, PA activities, such as plotting charts and collecting data, might be impacted when either a manual or auto purge is in progress. Alternatively, if you want to archive data before purging it, use the archival export functionality. For more information, see Archive data on page 265.

IMPORTANT: The current date and time on your management station (where PA is installed) is considered for deleting the records.

About automated data purge

PA automatically purges performance data that belongs to XP and XP7 disk arrays if either of the following conditions is met:

The database size has reached x% of the configured maximum database size, where x is the threshold value specified in the purgeparameters.properties file. By default, the threshold value is 70%.
The available disk space is y GB or less, where y is the disk space value specified in the purgeparameters.properties file.

Before the above-mentioned conditions are reached, the following prediction on when auto purge starts is displayed on the Purge/Archive screen:

Important: Based on available disk space, in 7 day(s), auto-purging will begin.

Before the above-mentioned conditions are reached, the following warning messages are displayed on the Purge/Archive screen, and an event is logged that can be viewed on the Event log screen:

Available disk space is less than X GB. Auto deletion operation will begin once the available disk space is less than Y GB. Here, X signifies the disk space that is considered for the warning before the auto purge operation begins, and Y signifies the minimum disk space.

Current database size has reached X % of Maximum configured database size. Auto deletion operation will begin once the current database size has reached Y % of Maximum configured database size. Here, X is the warning threshold and Y is the threshold for auto purge.

These warning parameters can be configured in the purgeparameters.properties file, which enables you to configure the purge operations. This file is located in the %XPPA_HOME%\HPSS\pa\properties folder. You can configure the following parameters in this file:

Threshold_Value: This value is considered for the auto purge operation. The default value is 70%. The auto purge operation begins if the database size exceeds 70% of the configured maximum database size. The allowed value is between 70 and 85. If a valid value is entered, the new value is considered for triggering the auto purge operation. If an invalid value is entered, the default value of 70% is considered as the threshold value.

Disk_Space_Value: The minimum default value is 3 GB. If the free disk space becomes less than 3 GB, the auto purge operation begins. A valid value between 3 and 5 can be entered. On entering an invalid value, the default value of 3 GB is considered.

Partition_Days: This value signifies the number of days of data a partition can contain in a table. The default value is 3. Each component table in the database can be partitioned in terms of 3 days of data.

However, you can enter any value between 1 and 7. If an invalid value is entered, the default value of 3 is considered.

NOTE: If a smaller value is entered, the amount of space reclaimed during an auto purge operation is smaller and the purge operation runs more frequently. Similarly, if a higher value is entered, the amount of space reclaimed during an auto purge operation is higher.

Disk_Space_For_Warning: This value, in GB, is considered for the warning issued before the auto purge operation begins. The default value is 10 GB. A warning message is displayed on the Purge/Archive screen and on the Event log screen once the available disk space becomes less than 10 GB. This serves as a warning before the auto purge operation begins. This value must be greater than the Disk_Space_Value.

Threshold_For_Warning: This value signifies the threshold in percentage. It is considered for the warning issued before the auto purge operation begins. The default value is 60%. If the database size reaches 60% of the configured maximum database size, a warning message is displayed on the Purge/Archive screen and on the Event log screen. This serves as a warning before the auto purge operation begins. This value must always be less than the Threshold_Value.

NOTE: Auto purge deletes the oldest records in the PA database and retains minimum data (a minimum of one partition of data) for each component in the array.

The following table describes the alert messages related to the Delete and Shrink operations that are logged on the Event log screen:

Alert: Available disk space is less than X GB. Auto deletion operation will begin once the available disk space is less than Y GB. Description: X is the specified value for Disk_Space_For_Warning and Y is the specified value for Disk_Space_Value in the purgeparameters.properties file. The warning message is displayed if the available disk space has reached X GB. The auto purge operation begins when the available disk space reaches Y GB.

Alert: Current database size has reached X% of Maximum configured database size. Auto deletion operation will begin once the current database size has reached Y % of Maximum configured database size. Description: X is the specified value for Threshold_For_Warning and Y is the specified value for Threshold_Value in the purgeparameters.properties file. The warning message is displayed if the database size reaches X% of the configured maximum database size. The auto purge operation begins when the database size reaches Y% of the configured maximum database size.

Alert: Auto deletion operation is in progress. Description: The auto purge operation is in progress.

Alert: Auto deletion operation completed successfully. Description: The auto purge operation is complete.

Alert: Since there is minimal data for the array X, it will not be considered for auto deletion. To decommission the array please use Purge by Array. Description: The array X has minimum data (a minimum of one partition of data) for reference. Deleting this array would result in complete loss of performance data for that array. Hence, it is not considered for deletion during the auto purge operation. If the array is no longer monitored, consider the Purge by Array option to decommission the array.
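Putting the documented defaults together, a purgeparameters.properties file might look like the following sketch. All values are the defaults described above; the key casing follows the parameter descriptions in this section, so verify it against the file shipped with your installation before editing:

# Auto purge starts when the DB reaches this % of the configured maximum size (allowed: 70-85)
Threshold_Value=70
# Auto purge starts when free disk space falls below this many GB (allowed: 3-5)
Disk_Space_Value=3
# Days of data per table partition (allowed: 1-7)
Partition_Days=3
# Warn when free disk space falls below this many GB (must be greater than Disk_Space_Value)
Disk_Space_For_Warning=10
# Warn when the DB reaches this % of the configured maximum size (must be less than Threshold_Value)
Threshold_For_Warning=60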

Manually purge data

Procedure

1. From the HPE XP7 Performance Advisor main menu, click Purge/Archive.
2. In the Purge pane, click Manual Purge.
3. In the Purge page, choose the type of purge. If you want to entirely purge the data of an array, select Purge by array, and from the Array list, select the array.
4. If you select the Purge by date option, the Purge data till field appears. Choose the date and time details from the calendar, click Done, and then click Purge. If you do not specify a date and time in the Purge data till field, the current date and time are considered for the purge. The date and time are to be specified only when you select to purge data by date.
5. In the confirmation page, click OK to delete all the configuration and performance data collected in the PA database for the chosen period of time.

If you want to view the specified array details again, request an update from the host connected to that array. Then, perform a configuration collection followed by a performance collection for that array. For more information, see Request host agent updates.

About Archive

You can archive performance data for XP/P9500/XP7 disk arrays from PA, for a specified duration of your choice. The performance data is exported as .dump and .csv files that are saved in the DBArchiveDump folder at the following location on the management station: <XPPA_HOME>\pa\tomcat\webapps\pa\DBArchiveDump\.

CAUTION:

Do not modify the names of the .dump and .csv files created during the export activity. HPE XP7 Performance Advisor uses these file names as references to identify the data that needs to be imported.
Do not modify the default settings that are configured for the HPE XP7 Performance Advisor database at the time of installation or upgrade.
The data archival process must not be initiated when an auto purge is in progress. A manual purge must not be initiated when the data archival is in progress.

IMPORTANT:

After the data is archived, it is permanently deleted from the PA database and the free disk space is released back to the database. If you want to use the archived data for an XP or an XP7 disk array, import the corresponding dump folder. Before importing data for an XP or XP7 disk array on the management station, ensure that a configuration collection has already been performed for that array, or perform a fresh configuration data collection.
If you want to take a backup of the PA database before archiving, use the Backup utility. For more information, see Migrate data to another management station.
You can import the archived data on to the same management station or another management station. However, ensure that the version of PA installed on the target management station is the same as that installed on the source management station from where the data is exported. For more information on importing data, see About importing data.
If you retain the current date and time for archiving the data, and the last collection date and time is before the current date and time, PA considers the last collection date and time for archiving the data. For example, if the current date and time is a time T1 and the last collection time stamp is an earlier time T0, PA considers T0 for archiving the data.

Archive data

Procedure

1. From the HPE XP7 Performance Advisor main menu, click Purge/Archive.
2. In the Archive pane, click Export.
3. In the Export page, select the XP/XP7 disk array serial number from the Array drop-down list. The Last Collection Cycle displays the date and time when the performance data was last collected for the selected XP/XP7 disk array.
4. In the Export data till field, select the date from the calendar window, and set the time by dragging the Hour and Minute controls. If you do not specify a date and time, the current management station date and time is considered for archiving the data.
5. Click Done to close the calendar window, and then click Ok.
6. In the confirmation page, click OK to initiate the export for the selected array.

PA archives data for the specified duration. As part of the archival process, PA does the following:

a. Logs one record under Export data for the date and time when the archival is complete.
b. Creates a folder in the DBArchiveDump folder and displays the folder name under File Name. This folder contains all the .dump and .csv files that are created during the export. The folder names are unique, with the XP/XP7 disk array serial number in the file name for easy identification. The following is the folder naming convention generated for XP and XP7 disk arrays:

PA<array_serial_number>_<archival_start_date>_<archival_start_time>_<Start_collection_interval_timestamp>_<End_collection_interval_timestamp>

For example, PA53036_20Oct2016_ _ _ for an XP array, and PA10055XP7_21Oct2016_ _ _ for an XP7 array.

Once the archival is complete, this folder is simultaneously displayed in the Archive Export/Import tab. You must select this file if you want to import the performance data for the XP/XP7 disk array.

TIP: You can also copy the dump folder from the DBArchiveDump folder to a CD/DVD and release the space occupied by the dump folder on the management station.

Migrate data to another management station

If you are moving data from an existing management station to a new management station, use the Backup utility to migrate PA settings and preferences. Use this tool to preserve the data and configuration preferences by saving the existing settings and restoring them on the new management station. Migrate or back up the PA database and settings based on the following options:

IMPORTANT: You must import the data to a management station running the same version of PA as the one on which the backup was taken.

CAUTION: HPE strongly insists that you do not manually copy, or use the drag-and-drop feature to move, the PADB folder to the target management station or to another location on the source management station. This action results in irrevocable loss of data. Use only the Backup utility provided by PA to migrate data.

IMPORTANT:

To use the Backup utility, ensure that the same version of PA is installed on both the backed-up PA and the restoring PA. If the PA versions mismatch, a Version incompatible error message is displayed.
Do not modify the default settings that are configured for the PA database at the time of installation or upgrade.
If you have already configured the serverparameters.properties file on the target management station, it is replaced with the serverparameters.properties file that you backed up from the source management station.
After you restore the database, the PA Tomcat service is automatically restarted to reflect the latest settings.

XP/P9500 Disk Array: Select this option to migrate the configuration and performance data that belongs to an array. Provide the 5-digit serial number of the array in the adjacent box. The data available for the specified array is backed up into the following .dump files:

pa<XP_Disk_Array>_exp.dump for XP disk arrays and pa<XP7_Disk_Array>xp7_exp.dump for XP7 disk arrays
old_xpslperf_exp.dump up to xpslperf_exp.dump

Where <XP or XP7_Disk_Array> refers to the array for which the data is exported.

Time: Select this option to migrate the configuration and performance data from the PA database for a duration that you want. Enter the duration (format DD-MM-YYYY) in the Start Date and End Date boxes. The data available for all the arrays during the specified duration is backed up into the following .dump files:

pa<XP_Disk_Array>_exp1.dump for XP disk arrays.
pa<XP_Disk_Array>_exp2.dump for XP disk arrays.
pa<XP7_Disk_Array>xp7_exp1.dump for XP7 disk arrays.
pa<XP7_Disk_Array>xp7_exp2.dump for XP7 disk arrays.
old_xpslperf_exp.dump up to xpslperf_exp.dump

All: Select this option to migrate the configuration and performance data from the PA database into the following .dump files:

pa<XP_Disk_Array>_exp.dump for XP disk arrays and pa<XP7_Disk_Array>xp7_exp.dump for XP7 disk arrays.
old_xpslperf_exp.dump up to xpslperf_exp.dump

Space requirements

Before taking a backup of the database, make a note of the Current Database Size under the Purge tab. While restoring the database, ensure that the total available space on the disk where the database is already installed is more than the backed-up database. If the database is installed on C:\HPE\HPSS\padb, the total available free disk space on C: must be greater than the size of the database to be restored.

Before restoring the database, increase the Configured Maximum Database Size of the target database by a value equal to the sum of the current target database size and the size of the database that is to be restored. For example, if the current database size is 5 GB and the size of the database to be restored is 13 GB, change the Configured Maximum Database Size under the DB Configuration/Purge tab to a size greater than 5 GB + 13 GB, which is 18 GB. So, increase the database size to 18 GB. This avoids the automatic purging of data from the target

management station database. For more information on auto purge, see About automated data purge.

Migrate data using the Backup utility

IMPORTANT:

You must not stop the PA and the database services while backing up or restoring data on a management station.
The data backup cannot be initiated for a single day.

1. Click Start > Programs > HPE XP7 Performance Advisor > Backup Utility. The Backup Utility window appears, displaying the following options:

DKC
Time
All

2. Based on your requirement, select one of the following options:

DKC: Provide the 5-digit serial number of the XP/XP7 disk array for which you want to take the data backup.
Time: Provide the duration for which you want to take the data backup, in the DD-MM-YYYY format.
All: Clicking this option initiates a backup of the complete HPE XP7 Performance Advisor database.

3. Click Backup. The Backup option is enabled only when you select one of the above-mentioned backup options. The Open File dialog box is displayed.
4. Choose a location, such as a network drive or shared file system, to save the backed-up data (.dump files). A confirmation dialog box is displayed.
5. Click Open.
6. Click Yes to proceed. The Backup status window is displayed.

Restoring backed up data using the Backup utility

1. Click Start > Programs > HPE XP7 Performance Advisor > Backup Utility. The Backup Utility window is displayed.
2. Based on the kind of backup done, select the appropriate backup option from the list displayed:

DKC
Time
All

IMPORTANT: You must select the same backup option that you had previously selected for taking the backup of data. For example, if you backed up data for a specific DKC ID, you must select DKC from the list of backup options while restoring the data. Selecting a different option, such as Time or All, results in an error, and the data restore does not proceed.

3. Click Restore. The Open File dialog box is displayed.
4. Navigate to the folder where the .dump files are located and click Open. A confirmation dialog box appears.
5. Click Yes to proceed. The Restore Progress status window is displayed. Also, the details of the data being restored are displayed in the command prompt window.

NOTE:

Even after the data is restored completely, a few PostgreSQL-related messages are shown in the restore log files. Ignore those messages.
The backing up and restoring of data cannot happen simultaneously. So, click Reset after a backup or a restore operation to disable the respective options and settings on the Backup Utility window.

Save or restore data from the Windows command line

To save your files, enter:

%XPPA_HOME%\bin\backuputility -backup target-path

Where target-path is the location, such as a network drive or shared file system, where you want to save the backup files. You can also back up data for an XP or an XP7 disk array DKC, or for a particular duration. The following are the commands:

%XPPA_HOME%\bin\backuputility -backup target-path dkc <DKC_Serial_Number>
%XPPA_HOME%\bin\backuputility -backup target-path time <Start date and time> <End date and time>

The format for the start and end date and time is as follows: DD-MM-YYYY

NOTE: If you have saved the PA database in a different location during installation, navigate to that location. The target-path that you specify must not include a space in the file location path.

To restore your files, enter:

%XPPA_HOME%\bin\backuputility -restore target-path

Where target-path is the location where you want to restore the files. You can also restore data for an XP or an XP7 disk array DKC, or for a particular duration. The following are the commands:

%XPPA_HOME%\bin\backuputility -restore target-path dkc <DKC_Serial_Number>
%XPPA_HOME%\bin\backuputility -restore target-path time <Start date and time> <End date and time>

The format for the start and end date and time is as follows: DD-MM-YYYY

(Example invocations of these commands are shown after the import overview below.)

About Importing data

You can import the archived data to another management station or back to the same management station from where the data was initially exported. For more information, see Import archived data to the same management station on page 271 and Import archived data to another management station on page 270.

CAUTION:

You must import the data to a management station running the same version of PA as the one from which the data was exported.
The import operation fails if there is not enough free space in the database to accommodate the imported data. To start an import operation, PA requires that there be sufficient space in the database, at least matching the size of the exported data. You must either archive or purge some of the existing data before you begin the import operation.
If you are importing data when an auto purge is in progress, the data import activity still continues. However, if the imported data happens to be among the oldest data that PA has selected for purging, the data being imported is also purged automatically. Hence, it is highly recommended that there be enough disk space available on the management station where the data import activity is initiated.
Do not move the dump folder from <XPPA_HOME>\pa\tomcat\webapps\pa\DBArchiveDump\ or copy the dump folder to any other location, because PA accesses these files only from <XPPA_HOME>\pa\tomcat\webapps\pa\DBArchiveDump\ on your management station.

Import archived data to another management station

Procedure

1. Copy the dump folder for an XP or an XP7 disk array from the source management station to the following location on the target management station: <XPPA_HOME>\pa\tomcat\webapps\pa\DBArchiveDump\
2. Follow steps 1-5 provided for importing data to the same management station. For more information, see Import archived data to the same management station on page 271.

The above-mentioned procedure is also applicable if you are accessing the target management station over the web (http(s)://[server name].[domain name]/pa).
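For reference, the following are example invocations of the Backup utility commands described under Save or restore data from the Windows command line. The serial number, dates, and target path are hypothetical placeholders, not values from your environment:

%XPPA_HOME%\bin\backuputility -backup E:\PABackup dkc 53036
%XPPA_HOME%\bin\backuputility -backup E:\PABackup time 01-10-2016 31-10-2016
%XPPA_HOME%\bin\backuputility -restore E:\PABackup dkc 53036

Note that the target path contains no spaces, as required, and that the restore uses the same option (dkc) that was used for the corresponding backup.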

Import archived data to the same management station

IMPORTANT:

After importing performance data for an array, ensure that you initiate a fresh configuration data collection for that array on the target management station, as the archival process exports only the performance data. For example, if you import performance data for a disk array that is not currently monitored by PA on the target management station, you cannot view its performance data until a configuration collection is performed for that array. When the DB is exported using the PA Backup Utility, the entire performance data can still be viewed even if the array is not managed by the management station, whereas when data is imported using Archive Import from the PA GUI, the performance data for the array cannot be viewed unless a configuration collection is issued for the array.
If you initiate a data import onto a target management station for an overlapping date range and the data already exists in the management station's database, you are prompted to either archive or purge the existing data and initiate the import process again. Consider the following example: performance data is archived twice for an array, and the two archived date ranges overlap. The first set of data is imported to a target management station. When you try to import the second set of data to the same target management station, PA prompts you to either archive or purge the existing data, and then import the second set of data again. This is because the performance data already exists in the PA database for the overlapping date range. So, first archive or purge the existing performance data for that date range, and then import the second set of performance data again.
After data is imported onto the target management station, you can use only the remaining TB-Days of the Meter based Term license that are available after the data is exported from the source management station. For example, if you installed 100 TB-Days of Meter based Term license on the source management station to monitor an additional usable capacity of 10 TB for 10 days, and 80 TB-Days are used before you export the data, only 20 TB-Days are available when the data is imported onto the target management station.

Procedure

1. From the HPE XP7 Performance Advisor main menu, select Purge/Archive.
2. In the Archive pane, click Import.
3. Based on the array for which you want to import its performance data, select the relevant file from the Select the .dmp file drop-down list, and then click Ok.
4. In the confirmation page, click OK to initiate the import of the archived data. Click Import.

Based on whether the import is for an array, PA does the following:

a. Displays an informational message that the import for the selected array is successfully initiated.
b. Imports performance data from the dump folder for arrays in the following format:

PA<array_serial_number>_<archival_start_date>_<archival_start_time>_<Start_collection_interval_timestamp>_<End_collection_interval_timestamp>

OR

PA<array_serial_numberXP7>_<archival_start_date>_<archival_start_time>_<Start_collection_interval_timestamp>_<End_collection_interval_timestamp>

The .dmp files within the folder follow the format PA<array_serial_number>_<archival_start_date>_<archival_start_time>_<Start_collection_interval_timestamp>_<End_collection_interval_timestamp>.DMP.

c. Logs one record under Import details for the date and time when the import is complete.
d. Displays the names of the dump folders that are imported, under Filename.

5. Perform a fresh configuration data collection for the array on the management station where you have imported the performance data. For more information on performing configuration data collection, see About performance data collections on page 59.

Export Database

About Export Database

PA retrieves performance values related to the DKC, LDEVs, ports, and the CLPRs for XP and XP7 disk arrays, and provides the data in separate .csv files. You can also view the performance values of journal pool LDEVs and the utilization values for Ext-LUNs and RAID groups. If you export data for an XP7 disk array, you can also view the average utilization percentage of an MP blade and the LDEV that is currently assigned to the MP blade. You can export data from the .csv files to a data visualization program, such as Microsoft Excel. For more information about the generated .csv files, see Export DB CSV files on page 274.

The .csv files are created when you export data for a specified duration or schedule it as a daily, weekly, or monthly activity. The .csv files are stored in the following location on the management station: \HPSS\pa\tomcat\webapps\pa\reports.

You can perform the following tasks from the Export DB screen:

Export performance and utilization data into .csv files. You can save and view the .csv files when required, or schedule the export activities on a periodic basis.
View .csv files by selecting the corresponding records.
Delete .csv file records and delete export activity schedules.

The Export Database report functionality provided through the PA GUI is the same as that available from the CLUI.

NOTE: To reduce the overall time taken to export performance data, the data access request has been optimized. The amount of data fetched through the Export DB functionality can be controlled through the TimeIntervalForExportDB parameter in the serverparameters.properties file. This file is located in the \HPSS\pa\properties folder. You can configure the following parameter in this file:

TimeIntervalForExportDB: The value (in seconds) set in this parameter indicates the amount of data per request to export. The default value is 21600 (6 hrs). For example, if you export data for 1 day, 4 requests, each for 6 hrs of performance data, are sent to the server.

Export DB screen details

File Name: Displays the file name. PA appends the name that you provide to the file names of all the .csv files that it generates.
Array Name: Displays the array name.
Report Type: Displays the time when the report was created.
User Name: Displays the name of the user who created the report. If you logged in to PA as an Administrator and created a report, the user name is displayed as Administrator.
File Type: Displays the file type.
Generate Time: Displays the time taken to generate the Export DB .csv file.
Start Date: Displays the start and the end time if it is a one-time export activity.
End Date: Displays the start and the end time if it is a one-time export activity.
Destination: Displays the email address to which the export .csv file must be sent.

Scheduled Export DB tasks

Occurrence: Displays the occurrence of a scheduled Export DB task.
Schedule Time: Displays the schedule time for an Export DB activity.

IMPORTANT: The .csv records for which an asterisk (*) is displayed before the User Name are generated through a schedule. The naming convention for the .csv records that have an associated schedule is: <resource type>_exportdb-<array serial number>_<Array Serial Number>_<Report Type>_<Schedule Type>_<Date>_<Time>.csv. The following is the file naming convention for the .csv records that are created using the Collect now option: <resource type>_exportdb-<array serial number>_<report name>.

Export DB CSV files

The following are the .csv files you can view when you save or schedule an Export DB report:

ldev_exportdb-array_serial_number_<file_name>.csv

This file includes the following details:

The XP/XP7 disk array serial number for which the report is generated.
The LDEVs present during the specified duration.
The RAID Groups to which the LDEVs belong.
The performance data collection interval time stamps.
The data for the following metrics:

RIO Read Cache Hits, RIO Reads, RIO Write Cache Hits, and RIO Writes.
SIO Read Cache Hits, SIO Reads, SIO Write Cache Hits, and SIO Writes.
Total IO, Inhibit Mode IO Count, and Bypass Mode IO Count.
Backend Transfer Sequential Reads, Backend Transfer Non-Sequential Reads, and Backend Transfer Writes.
Random MB Reads and Random MB Writes.
Sequential MB Reads and Sequential MB Writes.
Average Read Response Time (msec) data for every 200 msec and Maximum Read Response Time (msec) data for the last 30 seconds.
Average Write Response Time (msec) data for every 200 msec and Maximum Write Response Time (msec) data for the last 30 seconds.

The associated E-Port list, which is a list of Ext-Lun initiator ports that are used to connect E-Port(s) to an external array.
The associated E-Seq, which is the Ext-Lun provider's serial number for the E-seq(s) array.

The associated E-LDEV, which is the external LUN LDEV ID on the external array.
The associated CLPR group ID.

IMPORTANT: For a P9500/XP7 disk array, the ldev_exportdb file displays an additional Current MP column. This column displays the current MP blade for each LDEV record. The MP blade ID includes the cluster # and the blade location for the MP blade.

dkc_exportdb-array_serial_number_<file_name>.csv

This file includes the following details:

The XP/XP7 disk array serial number for which the report is generated.
The array type to which the selected XP or XP7 disk array serial number belongs.
The performance data collection interval time stamps.
The size of the cache and the MB/s of cache used over an entire collection interval.
The percentage of writes that are held in the cache, yet to be transferred to the disks over an entire collection interval.
The MB/s of the continuous access asynchronous sidefile usage over an entire collection interval.
The data accessed or the reads on a single CLPR over an entire collection interval.
The utilization of the shared memory CHIP/CHA and ACP/DKA transfer bus, and the utilization of the cache memory CHIP/CHA and ACP/DKA transfer bus.
For an XP disk array, the individual MP processor utilization data for the CHIPs/CHAs and the ACPs/DKAs is also displayed, in addition to the above-mentioned information.
For an XP7 disk array, the dkc_exportdb file does not include the CHIP/CHA MP and the ACP/DKA MP utilization data. Instead, the MP blades that reside on the XP7 disk array and their average utilization percentage are displayed.

IMPORTANT: Since the CHIP/CHA and the ACP/DKA MPs are moved to the MP blades in the XP7 disk arrays, their MP utilization metrics are not applicable for the XP7 disk arrays.

port_exportdb-array_serial_number_<file_name>.csv

This file includes the following details:

The XP/XP7 disk array serial number for which the report is generated.
The port IDs on the XP/XP7 disk array. The port type, such as Fibre or FCoE (applicable only for XP7 disk arrays), is also displayed beside the CHIP port ID.
The performance data collection interval time stamps.
The maximum and the minimum frontend I/Os on a port over an entire collection interval.
The average frontend I/Os on a port over an entire collection interval.
The maximum and minimum frontend throughput in MB/s that was read from or written to the port in the last 30 seconds of the collection interval.
The average frontend throughput in MB/s that was read from or written to the port over an entire collection interval.

The type of the port, such as Fibre (Target), Fibre (Cont Acc Target), Fibre (Cont Acc Initiator), or Fibre (Ext-Lun Initiator).
The associated E-Seq, which is the Ext-Lun provider's serial number for the E-seq(s) array.
The number of read, write, and total frontend IO/s on a port over an entire collection interval.
The frontend throughput in MB/s that was read from and written to the port over an entire collection interval.
The read response time, which is the average time taken to read the data from the port over an entire collection interval.
The write response time, which is the average time taken to write the data to the port over an entire collection interval.
The total response time, which is the average time taken for the data to be read from and written to the port over an entire collection interval.

clpr_exportdb-array_serial_number_<file_name>.csv

This file includes the following details:

The XP/XP7 disk array serial number for which the report is generated.
The CLPR IDs on the XP/XP7 disk array.
The performance data collection interval time stamps.
The MB/s of cache used over an entire collection interval.
The percentage of the writes that are held in the cache, yet to be transferred to the disks over an entire collection interval.
The MB/s of the continuous access asynchronous sidefile usage over an entire collection interval.
The data accessed or the reads on a single CLPR over an entire collection interval.

mp_exportdb-array_serial_number_<file_name>.csv

IMPORTANT: This file is created only when you save or schedule the Export DB report for a P9500/XP7 disk array.

This file includes the following details:

The P9500/XP7 disk array serial number for which the report is generated.
The MP blade IDs, the cluster # and the blade locations for the MP blades.
The average percentage of utilization over the entire collection interval. It is calculated from the utilization of all the individual processors in the MP blade.
The performance data collection interval time stamps.
The processors on an MP blade and their utilization percentage over the entire collection interval.
The processing types and the respective MP busy time. The MP busy time indicates the time taken by an MP blade to process the requests it receives from the associated processing type.

rgutil_exportdb-array_serial_number_<file_name>.csv

This file includes the following details:

The XP/XP7 disk array serial number for which the report is generated.
The RAID Group IDs on the XP/XP7 disk array.
The performance data collection interval time stamps.
The frontend random read and write I/Os on all the LDEVs in a RAID Group over the collection interval.
The utilization of the RAID Group over the collection interval for writing random and sequential parity.
The frontend sequential read and write I/Os on all the LDEVs in a RAID Group over the collection interval.
The total utilization of a RAID Group over a collection interval. When a RAID Group is associated with a ThP pool, this metric provides the extent of RAID Group utilization due to the I/Os occurring on the ThP pool.

jnl_exportdb-array_serial_number_<file_name>.csv

This file includes the following details:

The XP/XP7 disk array serial number for which the report is generated.
The journal group IDs on the XP/XP7 disk array.
The performance data collection interval time stamps.
The MU, which indicates the mirror unit number.
The CTG, which indicates the consistency group ID.
The JNLS, which indicates the journal pool status.
The AP, which indicates the number of active paths.
The U(%), which indicates the usage rate (%) of the journal data.
The Q-Marker, which indicates the latest sequence number for writing to the P-VOL's consistency group at the PAIR state.
The Q-CNT, which shows the number of remaining Q-Markers within the journal data.
The Num, which indicates the total number of LDEVs configured as the journal volumes.
The LDEV #, which indicates the cu:ldev ID that is configured as the journal volume.

hstgrp_exportdb-array_serial_number_<file_name>.csv

This file includes the following details:

The XP/XP7 disk array serial number for which the report is generated.
The performance data collection interval time stamps.
The host group name.
The backend tracks for the host group.
IOPS, which is the sum of IOPS on all the LDEVs that are mapped to a Host Group for a collection interval.
MBPS, which is the sum of MBPS of all the LDEVs that are mapped to a Host Group for a collection interval.

The Average Read Response Time, which is the average read response time of all the LDEVs that are mapped to a Host Group for a collection interval.
The Average Write Response Time, which is the average write response time of all the LDEVs that are mapped to a Host Group for a collection interval.
The Max Read Response Time, which is the highest read response time of any LDEV that is mapped to a Host Group for a collection interval.
The Max Write Response Time, which is the highest write response time of any LDEV that is mapped to a Host Group for a collection interval.

pool_exportdb-array_serial_number_<file_name>.csv

This file includes the following details:

The Pool ID and corresponding pool type on the XP/XP7 disk array.
The performance data collection interval time stamps.
The Pool IO/s, which is the sum of the IO/s occurring on the virtual volumes in a pool.
The Pool MB/s, which is the sum of the MB/s occurring on the virtual volumes in a pool.
The Pool Backend Tracks, which is the sum of the backend tracks occurring on the virtual volumes in a pool.
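Because the exported files are plain CSV, their records can be post-processed outside PA. The following is a minimal sketch, assuming a hypothetical file that follows the Collect now naming convention above; 'Total IO' is one of the documented ldev_exportdb metrics, but the exact header text should be verified against a real export.

    import csv

    # Hypothetical file name following the Collect now convention:
    # <resource type>_exportdb-<array serial number>_<report name>
    path = "ldev_exportdb-10055_myreport.csv"

    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        for row in reader:
            # Print one documented metric per collection time stamp;
            # the header text is an assumption and should be checked in the file.
            print(row.get("Total IO"))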

Create Export DB CSV files

Procedure

1. Click the HPE XP7 Performance Advisor main menu, and click Export DB.

IMPORTANT:

If you have logged in with user privileges, you cannot schedule the export DB activity.
The supported export DB version applies to both XP and XP7 disk arrays, and the same version is also required if you want to view the external LUN information.
Unlike the database purge and archival procedures, the Export DB activity does not affect the database size. It only exports data from the database into the .csv files.
Performance collection must be complete to export database files.

2. In the Export DB page, navigate to Actions > Create.

3. In the Export DB/Schedule Export DB page, based on your requirement, select the Collection Period as Collect now or Schedule. If you select the Collection Period as Collect now, proceed to step 4. If you select the Collection Period as Schedule, the following schedule options are enabled:

Collection Schedule: Displays Daily, Weekly, and Monthly.

Weekly: By default, Weekly is selected as the collection schedule. The corresponding Day of the Week list displays the weekdays. If you want to configure a weekly schedule:
Select the weekday when you want the schedule to be executed.
Select the time (hour : mins) when you want the schedule to be executed, from the Start Time lists.
Specify the number of times the schedule should repeat, in the No. of Occurrences box.
The data for a duration of one week prior to the scheduled time is exported.

Monthly: Clicking the Monthly collection schedule displays the Monthly Schedule. The following options are provided in a monthly schedule:
Based on Date, where you select a date in a month. The Date of the Month list is enabled when you select Based on Date.
Based on Day, where you select a day in a week. The Day of the Week and Week of the Month lists are enabled when you select Based on Day. Select a day and the corresponding week in a month to execute the schedule.
If you want to configure a monthly schedule:
Select the date of the month when you want the schedule to be executed, OR select the day and the corresponding week when you want the schedule to be executed.
Select the time (hour : mins) when you want the schedule to be executed, from the Start Time lists.
Specify the number of times the schedule should repeat, in the No. of Occurrences box.
The data for a duration of one month prior to the scheduled time is exported.

Daily: Clicking the Daily collection schedule displays the start time. If you want to configure a daily schedule:
Select the time (hour : mins) when you want the schedule to be executed, from the Start Time lists.
Specify the number of times the schedule should repeat, in the No. of Occurrences box.

The previous day's data is exported. For example, if the export DB report is scheduled on 01/01/2017 at 10:00:00 hrs, the data is exported from 12/31/2016 10:00:00 hrs to 01/01/2017 10:00:00 hrs.

4. Provide a name in the File Name box. The name must have a minimum of two characters and can have a maximum of 80 characters. PA appends the name that you provide to the file names of all the .csv files that it generates.

5. Select the Start Time and End Time, if it is a one-time export activity. If you are scheduling the export activity, select only the start time.

6. From the Array list, select the XP/XP7 disk array for which you want to save or schedule the Export DB report.

7. Select the check box for Human Readable Format, if you want to view the data for LDEVs in the cu:ldev format.

8. Select the check box for Version Number; the Select Version Number option is enabled with the supported version as the default value. The following image shows scheduling the export DB activity for 10055, which belongs to the XP7 Disk Array type.

9. Select the check box for Response Time to view the following read-write response times for all the LDEVs. The Response Time check box is enabled only when you select the Version Number check box.

Read: For LDEVs read response time
Write: For LDEVs write response time
All: For LDEVs read and write response time

10. Select the check box for RG Utilization, if you want to view the percentage of utilization for the RAID Groups. This option can be used only when the Response Time check box is selected and the supported version is used.

11. Select the check box for Display LDEV's of the Journal, if you want to view all the LDEVs that belong to a journal pool.

12. If you are scheduling the export activity, retain the recipient email address displayed in the box or specify an email address where you want to receive the email notifications. If you are saving the export DB report, the .csv files are available in the \HPSS\pa\tomcat\webapps\pa\reports folder.

NOTE: By default, the email notifications are sent to the recipient addresses specified on the Settings screen.

13. After you select or fill in the options, click Save. Click Reset anytime before saving the Export DB report to clear the current selection and restore the default settings.

Based on whether the export activity is for an XP or an XP7 disk array, PA does the following:

1. Creates the appropriate .csv files and appends the file name that you provided while configuring the export activity to the .csv file names. For more information, see Export DB CSV files on page 274. If you have chosen to view the RAID Groups utilization values or the performance values of LDEVs in a journal pool, the respective .csv files are also created. The corresponding set of records are displayed in the Exported DB Files section, under the View Exported/Scheduled Exported DB Files tab. For more information, see View Export DB CSV files on page 282.

2. Displays the status of the export activity on the Event Log screen. If the export activity is successful, the following message is displayed: Data exported successfully into XXX_<filename>.csv. If the export activity fails, the following message with severity set as User Action is displayed: Data cannot be exported into XXX_<filename>.csv. An export activity might fail if the performance data is not available for the specified duration. A separate notification about the failure is not sent to the recipients. Here, XXX in the above log messages refers to the component for which the export is initiated, such as the DKC, LDEV, port, CLPR, or the MP Blade (applicable only for the XP7 disk arrays).

If you have scheduled the export DB activity, in addition to the above, you also receive an email notification at the specified recipient addresses after the .csv files are created. However, the .csv files are not provided as attachments due to their large file size. Instead, every email notification provides links that the recipients can click to view the respective .csv files.

Import data to MS Excel

Procedure

1. Open the export DB file in MS Excel from the location where it is saved (\HPSS\pa\tomcat\webapps\pa\reports). The Text Import wizard appears.
2. In step 1 of 3 of the Text Import wizard, select the Delimited option (default selection).
3. Enter 1 in Start import at row, and select Windows (ANSI) in the File origin list.
4. Click Next.
5. In step 2 of 3 of the Text Import wizard, select Comma, and clear any other delimiters if they are selected. Retain the default values of the other fields, and click Next.
6. In step 3 of 3 of the Text Import wizard, highlight all the columns in the spreadsheet by pressing the Shift key while navigating to the last column using the scroll bar, and then clicking the last column.
7. Click Text in the Column data format panel, and then click Finish. The spreadsheet is populated with the PA data.
8. Do the following:
a. Select the corner cell between cells A and 1. The entire spreadsheet is highlighted.
b. Go to Format on the menu bar, select Column, and then select AutoFit Selection. The columns are adjusted to fit the text.
The MS Excel performance data sheet is now complete.
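As an alternative to the Text Import wizard, the exported file can be loaded programmatically. This is a minimal sketch and not part of PA; the file name is a hypothetical example, and reading every column as text with the Windows (ANSI) encoding mirrors the wizard settings in the procedure above.

    import pandas as pd

    # Hypothetical example of loading an Export DB file; dtype=str keeps all
    # columns as text, matching the wizard's Text column data format.
    df = pd.read_csv(
        r"C:\HPSS\pa\tomcat\webapps\pa\reports\ldev_exportdb-10055_myreport.csv",
        dtype=str,
        encoding="cp1252",  # Windows (ANSI), as selected in the wizard
    )
    print(df.head())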

View Export DB CSV files

Based on whether the export DB activity is for an XP or an XP7 disk array, PA creates the appropriate .csv files. For more information, see Export DB CSV files on page 274. The corresponding set of records for the Export DB report are displayed in the Exported DB Files section, under the View Exported/Scheduled Exported DB Files tab. If it is a scheduled export activity, the corresponding schedule details for the Export DB schedules are also displayed in the Scheduled Export DB tasks section, under the View Exported/Scheduled Exported DB Files tab. The following image shows the .csv files created for 10055, which belongs to the XP7 Disk Array type.

IMPORTANT:

The name of the user who created the report is displayed under User Name. If you logged in to PA as an Administrator and created the Export DB report, the user name is displayed as Administrator.
An asterisk (*) displayed before the User Name of a .csv record indicates that the record was generated through a schedule. The naming convention for the .csv records that have an associated schedule is: <resource type>_exportdb-<array serial number>_<Report Type>_<Schedule Type>_<Date>_<Time>.csv. Following is the file naming convention for the .csv records that are created using the Collect now option: <resource type>_exportdb-<array serial number>_<report name>.
The time when the report was created is displayed under Generation Time.

1. Click the HPE XP7 Performance Advisor main menu, and then click Export DB.
2. On the screen that appears, click the View Exported/Scheduled Exported DB files tab.
3. Select the check box for a .csv file in the Exported DB Files section and click View. The data in the .csv file is displayed in a new IE browser window. You can save a copy of the report by clicking File > Save or File > Save As on the browser menu.

Delete Export DB reports and schedules

Prerequisites

You can delete a schedule record in the Scheduled Export DB tasks section only if you have logged in to PA as an Administrator or a user with administrator privileges.

Procedure

1. From the HPE XP7 Performance Advisor main menu, click Export DB.
2. On the screen that appears, click the View Exported/Scheduled Exported DB files tab.
3. To delete the Export DB report, select the check box for the corresponding .csv records in the Exported DB Files section and click Delete. To delete the Export DB report schedules, select the check box for the corresponding schedules in the Scheduled Export DB tasks section and click Delete.

Reports

About Reports

Reports provide a history of the performance data collected for a specified XP or XP7 disk array, where a visual representation of the performance trend of components is shown for a duration that you specify. The performance data points are plotted for different metrics that help analyze the performance of an XP or an XP7 disk array. Reports can be a one-time activity, where you either generate and view a report, or save a copy of the report for later reference. You can also schedule reports on a periodic basis, where data is automatically provided in the corresponding report for the duration that you specify.

If you generate a report, you can only view a temporary copy of the report. You cannot retrieve the report once it is closed. If you save a report, it is available in the following location: Local_drive:\HPSS\pa\tomcat\webapps\pa\export. By default, the Local_drive on the management station refers to the C drive, where the Windows operating system is installed and the HPSS folder is also copied.

IMPORTANT: If you have logged in as an Administrator or a user with administrator privileges, you can generate or save reports, or schedule reports periodically. If you have logged in with user privileges, you cannot schedule reports. You can only generate or save reports.

About viewing reports

IMPORTANT:

Report schedules with an asterisk (*) before the User Name indicate that the reports are generated by a schedule. Following is the naming convention for reports that have an associated schedule: <Array Serial Number>_<Report Type>_<Schedule Type>_<Date>_<Time>.html/pdf/rtf

NOTE: Following is the naming convention for CHIP Utilization reports: <Array Serial Number>_<Report Type>_<Available CHIP's Type>_<Schedule Type>_<Date>_<Time>.html/pdf/rtf

If the Dest for a report record is blank, it implies one of the following:
The report is not a scheduled report.
The report is scheduled, but an email address is not provided or is invalid. If the email address is not provided or is invalid, you will not receive any email notification even though the report is generated. You need to go to the following location and select the report you want to view: <Local_drive>:\%HPSS_HOME%\pa\tomcat\webapps\pa\reports. All the reports are available in this location.

If the XP/XP7 disk array for which you have created a report is in license violation, the following warning message is displayed at the beginning of the report:

WARNING: License violation was detected for this array. This report may not capture performance data about the recent configuration changes made in your <XP or XP7> disk array. Please purchase the required HPE XP7 Performance Advisor licenses immediately.

About Schedule Reports

The report schedules that you create appear in the Scheduled Reports pane (Reports > View Reports).

IMPORTANT:

The Scheduled Reports section appears only if you have logged in as an Administrator or a user with administrator privileges.
You can generate, save, or schedule reports for the template charts. The reports contain the performance graphs for the combination of components and metrics saved in the Templates screen. You can also select the duration for which you want to view the performance graphs of the components.
If the Dest for a schedule record is blank, it implies that the report is scheduled, but an email address is not provided or is invalid. In such cases, you do not receive any email notification even though the report is generated. You need to go to the following location and select the report you want to view: <Local_drive>:\%HPSS_HOME%\pa\tomcat\webapps\pa\reports. All the reports are available in this location.
If a particular schedule is not repeatable (that is, the number of occurrences is set to 1), it is deleted from the PA database and also removed from the Scheduled Reports section after the report is generated. Only those schedules for which the number of occurrences is more than 1 are still displayed in the Scheduled Reports section. The schedules that have reached their end date are also deleted automatically.

Types of reports

Reports in PA provide a high-level performance view of the XP and XP7 disk arrays, and the utilization of individual components in these disk arrays. Each report includes the 50th, 90th, and 95th percentile values as legends in the charts. For more information on percentile values, see View 50th, 90th and 95th percentile value in charts.

Following are the different reports that you can view in PA. The Yes and No entries under the For XP disk arrays and For P9500/XP7 disk arrays columns indicate whether that particular report is available for the respective disk array type:

Array Performance

Description: The Array Performance report provides the overall array performance by measuring the total I/Os, and the read and write I/Os on that array. The Array Performance report comprises the following reports:
Total I/O Rate
Total I/O Rate by hour of day
Total I/O Rate Detail
Read-Write Ratio
Read-Write Ratio by hour of day
Read-Write Detail
Max/Min Frontend Port IOPS
Max/Min Frontend Port MB/s
In addition, it includes a section called Findings at the beginning of the report.
For XP disk arrays: Yes. The Findings section provides a brief summary on the status of the CHIPs, cache, ACP, and the LDEVs.
For P9500/XP7 disk arrays: Yes. The Findings section provides a brief summary on the status of the cache, LDEVs, and the MP Blades. The utilization summary of the CHIP/CHA and ACP/DKA MPs is not displayed in the Array Performance report Findings section.

ACP Utilization

Description: The ACP Utilization report provides data on the utilization of the various installed ACP/DKA pairs for the duration that you specify. You can also view the ACP Utilization by Hour of the Day report that provides the utilization data for all the ACP/DKA pairs averaged over a 24-hour period.
For XP disk arrays: Yes.
For P9500/XP7 disk arrays: No.

Cache Utilization

Description: The Cache Utilization report provides data on the following:
Utilization of cache
Percentage of pending writes
Read hits as a percentage of total read operations
Total number of transfers per second
Total number of transfers over the past 24 hours
Cache side file utilization for the continuous access asynchronous transfers
CLPR MP Blade Write Pending Rate
CLPR MP Blade Usage Rate
For XP disk arrays: Yes.
For P9500/XP7 disk arrays: Yes.

CHIP Utilization

Description: The CHIP Utilization report provides data on the utilization of the various installed CHIPs/CHAs for the duration that you specify. You can also view the CHIP/CHA Utilization by the Hour of the Day report that provides the utilization data for all the CHIPs/CHAs averaged over a 24-hour period.
For XP disk arrays: Yes.
For P9500/XP7 disk arrays: No.

LDEV IO

Description: The LDEV IO report provides data on the busiest LDEVs and RAID Groups.
For XP disk arrays: Yes.
For P9500/XP7 disk arrays: Yes.

LDEV Activity

Description: The LDEV Activity report provides data on the average performance and the utilization of LDEVs. The following are the different metrics available for the LDEV Activity report:
FrontEndIO
BackEndIO
MB
Utilization
Read Response Time
Write Response Time
The LDEV data corresponding to each of the above metrics is provided in a separate .csv file, based on the metric that you select.
For XP disk arrays: Yes.
For P9500/XP7 disk arrays: Yes.

RAID Group Utilization

Description: The RAID Group Utilization report provides the top 32 RAID Groups, derived based on the extent of utilization of each RAID Group. It is available as a standalone report and also as a part of the All report.
For XP disk arrays: Yes.
For P9500/XP7 disk arrays: Yes.

Continuous Access Journals

Description: The Continuous Access Journals report provides data on Continuous Access metrics and Journals metrics. The first five charts provide the aggregate of all CA ports for the selected metrics. Figures 6 to 10 provide information on individual CA Target and Initiator ports. Figures 10 to 15 provide data on Journals.
For XP disk arrays: Yes.
For P9500/XP7 disk arrays: Yes.

ThP Pool Occupancy

Description: The ThP Pool Occupancy report provides data on the utilization percentage of the eight busiest ThP pools. It also includes pool performance metrics such as:
Frontend IO per second
Frontend MB per second
Backend tracks
Average read response time
Average write response time
Max read response time
Max write response time
For XP disk arrays: Yes.
For P9500/XP7 disk arrays: Yes.

MP Blade Utilization

Description: The MP Blade Utilization report provides data on the average utilization of each installed MP blade. In addition, the following are also included in the report for each MP blade:
The top 20 consumers (LDEVs, continuous access journal groups, or E-LUNs) and their average utilization of the CPU cycles.
The average MP blade utilization by each processing type.
So, if there are four MP blades, the MP Blade Utilization report displays the utilization data and related charts in the following order:
1. Average utilization data for the first MP blade.
2. Average utilization by top 20 consumers for the first MP blade.
3. Average utilization by the processing types for the first MP blade.
The above-mentioned sequence is repeated for the subsequent MP blades.
For XP disk arrays: No.
For P9500/XP7 disk arrays: Yes.

Host Group Performance

Description: The Host Group Performance report provides the overall host group performance by measuring the total IOs for the host group. It comprises the following reports:
Total IO
Total IO Write
Total IO Read
Total MB
Total MB Write
Total MB Read
Average Read Response Time
Average Write Response Time
Maximum Read Response Time
Maximum Write Response Time
Two pie charts show the following metrics:
1. Total IO divided into random and sequential components for the following: Random Read, Random Read Cache, Random Write, Sequential Read, Sequential Read Cache, and Sequential Write.
2. Total MB divided into random and sequential components: Random Read, Random Write, Sequential Read, and Sequential Write.
For XP disk arrays: Yes.
For P9500/XP7 disk arrays: Yes.

Front End Port Utilization

Description: The Front End Port Utilization report comprises the following reports:
Maximum Port IO
Minimum Port IO
Average Port IO
Average Port MB
Maximum Port MB
Minimum Port MB
For XP disk arrays: Yes.
For P9500/XP7 disk arrays: Yes.

All

Description: The All report consolidates and provides the above-mentioned reports in a single report for the selected date and time range.
IMPORTANT: The LDEV Activity report is not included in the All report.
For XP disk arrays: Yes. NOTE: The MP Blade utilization data is not applicable for the XP disk arrays, so the MP Blade Utilization report is not included in the All report generated for the XP disk arrays.
For P9500/XP7 disk arrays: Yes. NOTE: The ACP/DKA and the CHIP/CHA utilization data are not applicable for the XP7 disk arrays, so their reports are not included in the All report generated for the XP7 disk arrays.

To view sample reports for the above-mentioned report types, see Sample reports.

IMPORTANT: Reports on the following are available only if they are configured in the selected XP or XP7 disk array. If not configured, they are not displayed as options to select for creating their reports. In addition, they are also not displayed in other related reports, like the Array Performance and the All reports.
Journal Pool Utilization
ThP Pool Occupancy
Snapshot Pool Occupancy

In the report that you create, PA plots the data for a maximum of eight components in each chart that is displayed in the report. For example, if you want to view the LDEV IO report for the 64 busiest LDEVs, PA provides a single report that includes eight charts. Each chart accommodates data for a maximum of eight LDEVs.

About RAID Group Utilization report

The RAID Group Utilization report provides data on the most utilized RAID Groups in an XP or an XP7 disk array. The utilization of each RAID Group is derived based on the backend transfers addressed by the RAID Group and indicates the total utilization over an entire collection interval. You can view the report for the 8-32 busiest RAID Groups, and each chart in the report displays the utilization graphs for eight RAID Groups. To generate or schedule a RAID Group Utilization report, follow the procedure given for creating or scheduling a report. The report displays the utilization graphs for only those RAID Groups that have managed backend transfers. When a RAID Group is associated with a ThP pool, the extent of utilization due to the I/Os occurring on the ThP pool is considered.

About Continuous Access Journals report

The Continuous Access Journals report provides data on seventeen metrics, including the aggregation of Average IOPS, Average Throughput, Hourly Throughput, Daily Throughput, and Weekly Throughput for all CA Initiator and Target ports. The same set of metrics is available for individual CA Initiator and Target ports in the next set of graphs. From the Journals screen, the report exports the RIO response time metric for all Journals, the Async transfer rate for all Journals, the Journal utilization and Recovery Point Objective (RPO), and the Copy rate for all Journals. In addition, the following are also included in the report:

Top 20 Pvol IOPS
Top 20 Pvol Throughput

NOTE: When you select the report type as All, the CAJ part of the data in the report includes only the journal pool utilization for the top 8 journals. You can only generate the CAJ report on its own; the multi-select option in the Report Type list is disabled, so it cannot be combined with other report types.

About LDEV IO report

The LDEV IO report provides data on the busiest frontend and backend LDEVs and RAID Groups on an XP or an XP7 disk array. It is based on the frontend I/Os and the backend transfers. You can view the report for the busiest frontend and backend LDEVs, and the 8-32 busiest frontend and backend RAID Groups. The port type, such as Fibre or FCoE (applicable only for P9500/XP7 disk arrays), is also displayed beside the port ID, which is associated with the particular LDEV.

The selection is in multiples of eight for the frontend and the backend LDEVs, and ranges from 8-32 for the frontend and the backend RAID Groups. If you do not select any value from the respective drop-down lists, by default, the LDEV IO report is generated for the eight busiest frontend and eight busiest backend LDEVs, and the eight busiest frontend and eight busiest backend RAID Groups. Further, the report displays the graphs for only those LDEVs that have associated I/Os and those RAID Groups on which I/O transactions have occurred. Consider the following example: A report is created to view the 32 busiest frontend LDEVs and the 16 busiest frontend RAID Groups, and only eight of the selected 32 LDEVs and four of the selected 16 RAID Groups are busy. PA generates the LDEV IO report where you can view the graphs for only the eight LDEVs and four RAID Groups on which the maximum I/O transactions have occurred. The graphs are not shown for the remaining LDEVs or RAID Groups. The LDEV IO report also provides a link to the additional

LDEV IO mapping information. The busiest LDEVs are displayed at different ranks in a tabular format. For more information, see Create an LDEV IO report.

Create an LDEV IO report

Procedure

1. Select LDEV IO from the Report Type list.
2. Select the LDEVs and the RAID Groups based on the frontend I/Os and backend transfers from the following lists:
LDEVs from the FrontEnd LDEVs list
LDEVs from the BackEnd LDEVs list
Frontend RAID Groups from the RG(s) list
Backend RAID Groups from the RG(s) list

NOTE: By default, the Array Performance report is populated in the Report Type field in the Create Reports page. Click on it again to remove it from the selection.

About LDEV Activity report

You can view the most and the least busy LDEVs in an XP or an XP7 disk array through the LDEV Activity report. The LDEV data can be for one of the following metric types:

FrontEndIO
BackEndIO
MB
Utilization
Read Response Time
Write Response Time

The most and the least busy LDEVs are collated based on the maximum and minimum threshold levels that you specify, and also the metric type that you select. For the metric type and duration that you specify, the average of the total performance of each LDEV is considered. Further, the average value is verified against the set threshold levels to see if that particular LDEV's performance is above or below the threshold limit. Based on their average values, the LDEVs are grouped in the top 100 busiest or the least 100 busiest LDEVs, and displayed in the CSV file. This implies that only those LDEVs that are above the maximum or below the minimum set threshold limits are considered. The associated drive type is also displayed for each LDEV. This information helps you to identify if the associated drive is supporting the required LDEV performance. If not, move the LDEV to a different drive type.
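The following sketch illustrates the threshold-based selection logic described above. It is an illustration only, not PA's implementation; the sample values and variable names are hypothetical.

    # Hypothetical illustration of the LDEV Activity selection logic:
    # average each LDEV's metric over the duration, then keep the LDEVs whose
    # averages are above the upper threshold or below the lower threshold.
    samples = {
        "00:01": [120.0, 140.0, 160.0],  # per-interval metric values per LDEV
        "00:02": [2.0, 3.0, 1.0],
        "00:03": [55.0, 60.0, 65.0],
    }
    upper, lower = 100.0, 5.0  # Metric Upper/Lower Threshold limits

    averages = {ldev: sum(v) / len(v) for ldev, v in samples.items()}
    most_active = sorted((l for l, a in averages.items() if a > upper),
                         key=averages.get, reverse=True)[:100]
    least_active = sorted((l for l, a in averages.items() if a < lower),
                          key=averages.get)[:100]
    print(most_active, least_active)  # ['00:01'] ['00:02']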

Create an LDEV Activity report

To generate, save, or schedule an LDEV Activity report, follow the procedure given for creating or scheduling a report. For more information, see Schedule reports on page 298. In addition, ensure that the following steps specific to an LDEV Activity report are also completed:

1. Select LDEV Activity from the Report Type list.
2. Select the Metric Type as:
FrontEndIO: Select this metric type to view a report of the most active or the least active LDEVs (or both) based on the threshold specified for the total frontend I/Os.
BackEndIO: Select this metric type to view a report of the most active or the least active LDEVs (or both) based on the threshold specified for the total backend transfers.
MB: Select this metric type to view a report of the most active or the least active LDEVs (or both) based on the threshold specified for the total frontend throughput in MB/s.
Utilization: Select this metric type to view a report of the most active or the least active LDEVs (or both) based on the threshold specified for the total RAID Group utilization of each LDEV.
Read Response Time: Select this metric type to view a report of the most active or the least active LDEVs (or both) based on the threshold specified for the read response time of each LDEV.
Write Response Time: Select this metric type to view a report of the most active or the least active LDEVs (or both) based on the threshold specified for the write response time of each LDEV.
3. Provide the Metric Upper Threshold and Metric Lower Threshold limits. The threshold limits that you specify are independent of each other and applicable to only the category that you select. You can set both the maximum and the minimum threshold levels, or one of them based on your requirement. It is not mandatory to specify both the maximum and minimum threshold limits. When you generate, save, or schedule this report, all the LDEVs that are above the specified maximum threshold limit and below the minimum threshold limit are displayed in the report.

Reports screen details

Screen elements and descriptions:

Report Name: Displays the name of the report. The name must not be less than 2 characters or exceed 80 characters in length. This is a mandatory field.
Array Name: Displays the array name and the serial number.
Report Type: Displays the type of report that you want on the selected XP or XP7 disk array.
User Name: Displays the name of the user who created the report. For example, the user name is displayed as Administrator against the report that you created, if you logged in to PA as an Administrator.

File Type: Displays the format in which you want to view the report. The following are the supported file formats: HTML, PDF, RTF, CSV (only for LDEV Activity), and DOCX.
Generation Time: Displays the time when the report is created.
Start Time: Displays the start date and time for the schedule.
End Time: Displays the end date and time for the schedule. PA calculates the end time based on the start time and the number of occurrences that you specify. For example, if Occurrence displays Every Wednesday at 11:00 hrs for a schedule record and the schedule repeats for three consecutive weeks, the End Time displays the date of the third Wednesday at 11:00:00; the schedule runs at 11:00 hours on that day for the last time, before it is automatically deleted.
Dest.: Displays the email address that you provided (applicable only when you schedule a report). If the email address is not provided or is invalid, you will not receive any email notification even though the report is generated. You need to go to the following location and select the report you want to view: <Local_drive>:\%HPSS_HOME%\pa\tomcat\webapps\pa\reports. All the reports are available in this location. (The Local drive on the management station refers to the C drive, where the Windows operating system is installed, and the HPSS folder is also copied to the C drive.)
Occurrence: Displays the number of times a particular schedule is repeated. The occurrence is aligned to the selected schedule frequency. In addition, this column also displays the selected schedule frequency and the start time. The format displayed is <schedule frequency> at <start time> (for example, Every Wednesday at 11:00 hrs).
Schedule Time: Displays the time when you created the schedule.

Create Reports screen

Collection Period: Displays the collection period.
Customer Name: Displays the name of the customer or company.
Consultant Name: Displays the name of the consultant.
Array Location: Displays the location of the XP/XP7 disk array for which the report is generated. This information is useful if the XP/XP7 disk array is located in a different site, away from the management station.

Based on: Displays the option to choose the report type by Array or by Host Group for a specified array.
Array: Displays the DKC or the disk array (serial number) along with the supported XP disk array models (P9500, XP24000, XP20000, XP12000, and XP10000), and the XP7 disk arrays that are monitored by PA.
Host Group: Displays the host group name. This field is enabled only when Host Group is selected in the Based on option.
Template: Displays the option to create reports based on a template. You can choose a template for which you want to generate a report using the Template filter. To create a chart template, see Save template charts.

NOTE: The Report Name, Customer Name, Consultant Name, and the Array Location are prepopulated in the respective fields, if you have already configured them as common settings on the Settings screen. These details are applicable for all the reports that you create. If you do not want to use these default descriptions, modify the respective fields. However, the changes are applicable only for the current report that you generate, save, or schedule.

Generate and save one-time reports

Procedure

1. From the HPE XP7 Performance Advisor main menu, click Reports.
2. In the Reports screen, click Create.
3. In the Create Reports page, set the Collection Period to Collect now.
4. Provide details in the respective fields as follows: Report Name, Customer Name, Consultant Name, and Array Location.
5. In the Based on option, select Array if you want to create a report based on arrays, Host Group if you want to create a Host Group-based report, or Template if you want to create reports based on a template.
6. From the Report Type list, select the report type, and from the File Type list, choose the type of file that you want.
7. Specify a start and end date and time from the respective calendars. If you retain the default date and time (start date as <current date> 00:00:00 and end date as <current date> 23:59:59), PA generates the report for the current date starting from 00:00:00 to the time when you initiated the report.
8. To create a report, click Generate. PA does not save the report in its database or display records for the report in the Reports section (Reports > View Reports). Instead, you view only a temporary copy of the report. The report cannot be retrieved once it is closed. If required, manually save a copy of the report on your system based on the report file format:

If the file format is HTML, the generated report is displayed in a new browser window. You can save a copy of the report by clicking File > Save or File > Save As on the browser menu.
If the file format is PDF, DOCX, or RTF, you are prompted to either open and view the report, or save the report by downloading it to your local system. Based on your requirement, click Open or Save, or click Cancel to cancel the request. The RTF format is not supported for the All report type. Use the DOCX format to view the All report.

NOTE: Allow pop-ups in your browser to access a generated report.

9. To save a report, click Save. In the confirmation page, click OK. Check the Event Log for the following informational message: Report successfully saved as <report_name>. PA saves the report in its database and also displays a record for the report in the Reports section in the View Reports table. By default, the new record is displayed at the end of the list. The following details, along with those you provided while creating the report, are displayed for the report record in the Reports section:

User Name: Name of the user who created the report. For example, the user name is displayed as Administrator against the report that you created, if you logged in to PA as an Administrator.
Generation Time: The time when the report is created.

Schedule reports

Procedure

1. From the HPE XP7 Performance Advisor main menu, select Reports.
2. In the Reports screen, click Create.
3. In the Create Reports page, set the Collection Period as Schedule.
4. Provide details in the respective fields as follows: Customer Name, Consultant Name, and Array Location.
5. In the Based on option, select Array if you want to schedule a report based on arrays, Host Group if you want to create a Host Group-based report, or Template if you want to create reports based on a template.
6. From the Array list, choose the array. If you want to schedule a report by host group, select from the Host Group list. If you want to schedule reports on template metrics, select the desired template from the Template list.

NOTE: You cannot combine other reports with templates.

7. From the Report Type list, select the report type, and from the File Type list, choose the type of file that you want.

8. Set the Collection Schedule as Daily, Weekly, or Monthly. By default, Weekly is selected as the collection schedule.
a. If you select Weekly as the collection schedule, specify the Day of the Week. This drop-down list displays the week days. Select the week day when you want the schedule to be executed. Also choose a Start Time from the drop-down list.
b. If you select Monthly as the collection schedule, the Monthly Schedule is displayed. The following options are provided in a monthly schedule:
Based on Date, where you select a particular date in a month. The Date of the Month list is enabled when you select Based on Date, so you can choose a date of your choice.
Based on Day, where you select a particular day in a week. The Day of the Week and Week of the Month lists are enabled when you select Based on Day. Choose the day and the corresponding week in a month for executing the schedule.
c. If you select Daily, provide the start time for the schedule. Select the Start Time as the time when you want the schedule to be executed. The Start Time list displays the time in a 24-hour format.
After a report is created as per the schedule, PA sends an email notification informing the status of the report execution to the specified email address.

9. In the No. of Occurrences box, provide the number of times the schedule must be executed. It is mandatory to provide the number of times a schedule must be repeated (no. of occurrences). For example, if you select Daily as the schedule frequency, the occurrence as 1, and the start time as 9:00 a.m., the schedule is executed only once at 9:00 a.m. on that particular day. PA generates a report that provides data for the past 24 hours, considering that 9:00 a.m. is the start time. The Start Time and the No. of Occurrences are common for the Daily, Weekly, and the Monthly collection schedules.

10. In the Email box, provide the recipient email address of the user who has to receive an email notification when the report is executed as per the schedule. The report is provided as an attachment to the email notification. By default, email notifications are sent to administrator@localhost, which is the common destination address for all report notifications. You can also specify a different destination email address on the Settings screen. The report notifications generated thereafter are redirected to the new destination address. For more information, see Configure reports settings.

11. Click Save. PA does the following:
a. As per the specified start time and schedule frequency, HPE XP7 Performance Advisor creates a report and adds a record for that report in the Reports section (Reports > View Reports).
b. Provides the report as a file attachment to the email notification that is sent to the intended recipients.
Click Reset anytime to clear the current selections and restore the default settings.

NOTE: To learn what to infer from the data displayed in the Scheduled Reports section, see Understanding report schedule records.

Report schedule examples

This section describes what to infer from the data displayed in the Schedules section (Reports > View Reports).

Example 1: The Schedule Time for a schedule displays 09/09/2016 17:39:28, which means that the schedule is created on 9th September 2016 at 17:39:28 hours. The Occurrence for this schedule displays Every Wednesday at 11:00 hrs, which means PA is supposed to generate a report every Wednesday at 11:00 hours. So, the schedule is active and the first report is generated on 10th September 2016 at 11:00 hours. The End Time for this schedule displays 09/24/2016 11:00:00, which means the last report that PA generates is on 24th September 2016 at 11:00 hours. This is because, while creating the schedule, the number of times it should repeat was entered as 3 in the Occurrence box. It implies that PA repeats the schedule only three times before it is automatically deleted. Hence, in addition to 10th September 2016, PA also executes the schedule on the 17th and 24th September 2016.

Example 2: The Schedule Time for a schedule displays 09/09/2016 17:43:09, which means that the schedule is created on 9th September 2016 at 17:43:09 hours. The Occurrence for this schedule displays Day 1 of every month at 9:00 hrs, which means PA is supposed to generate a report on the first day of every month at 9:00 hours. Hence, the schedule is active and a report is generated only the month after September, on 1st October 2016 at 9:00 hours. The End Time for this schedule displays 11/01/2016 09:00:00, which means the last report that PA generates is on 1st November 2016 at 09:00 hours. This is because, while creating the schedule, the number of times it must repeat is given as 2 in the Occurrence box, which implies that the schedule repeats only twice before it is automatically deleted. Hence, in addition to 1st October 2016, PA also executes the schedule on 1st November 2016.

Example 3: The Schedule Time for a schedule displays 09/10/2016 00:15:02, which means that the schedule is created on 10th September 2016 at 00:15:02 hours. The Occurrence for this schedule displays Daily at 19:00 hrs, which means PA is supposed to generate a report daily at 19:00 hours. Hence, the schedule is active and a report is generated only the day after 9th September 2016, on 10th September 2016 at 19:00 hours. The End Time for this schedule displays 09/10/2016 19:00:00, which means that the last report that PA generates is on 10th September 2016 at 19:00 hours. This is because, while creating the schedule, the number of times it must repeat is given as 1 in the Occurrence box. It implies that the schedule executes only once before it is automatically deleted.

Example 4: The Schedule Time for a schedule displays 09/09/2016 19:35:06, which means that the schedule is created on 9th September 2016 at 19:35:06 hours. The Occurrence for this schedule displays The Second Thursday of Every Month at 00:00 hrs, which means PA is supposed to generate a report on the second Thursday of every month at 00:00 hours. Hence, the schedule is active and a report is generated on 11th September 2016 at 00:00 hours. The End Time for this schedule displays 12/11/2016 00:00:00, which means the last report that PA generates is on 11th December 2016 at 00:00 hours. This is because, while creating the schedule, the number of times it should repeat is given as 4 in the Occurrence box. It implies that the schedule repeats only four times before it is automatically deleted. Hence, in addition to 11th September 2016, PA also executes the schedule on 9th October, 13th November, and 11th December 2016.
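As a worked illustration of the occurrence arithmetic in Example 1 (a weekly schedule repeated three times), the following sketch computes the run dates. It uses only the dates stated in the example and is not part of PA.

    from datetime import date, timedelta

    # Example 1: first run on 10th September 2016, 3 occurrences in total.
    first_run = date(2016, 9, 10)
    occurrences = 3

    run_dates = [first_run + timedelta(weeks=i) for i in range(occurrences)]
    print(run_dates)  # 2016-09-10, 2016-09-17, 2016-09-24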

View reports

Procedure

1. From the HPE XP7 Performance Advisor main menu, click Reports > View Reports.
2. From View Reports, click the report that you want to view.
If the file format is HTML or CSV, the report is displayed in a new IE browser window. You can save a copy of the report by clicking File > Save or File > Save As on the browser menu.
If the file format is PDF or RTF, you are prompted to either open and view the report, or save the report by downloading it to your local system. Based on your requirement, click Open or Save, or click Cancel to cancel the request.

NOTE: In the report template, a maximum of 50 components are supported per metric.

Delete reports

Procedure

1. From the HPE XP7 Performance Advisor main menu, click Reports > View Reports.
2. In View Reports, select the report record that you want to delete.
3. Click Delete. Click OK when prompted to confirm. The report copy is also deleted from the <Local_drive>:\%HPSS_HOME%\pa\tomcat\webapps\pa\reports folder.

Enable email notifications

For PA to dispatch report email notifications to the intended recipients, you must add the IP and port addresses of the source SMTP server, and also specify the source email address. For more information, see Configure reports settings on page 153.

Delete report schedules

Procedure

1. From the HPE XP7 Performance Advisor main menu, click Reports.
2. In Scheduled Reports, select the schedule record that you want to delete.
3. Click Delete. Click OK when prompted to confirm.

Virtualization for reports

PA maintains a temporary buffer in a folder called the Virtualizer folder for the report data that is being generated. This is useful if the report is for viewing a large number of LDEVs. In such cases, PA uses a certain amount of the management station's disk space to temporarily store the data until the entire report is generated. Once the report is completely generated, delete the cached report file from the Virtualizer folder to release the disk space for other activities.

When the first report is created, PA creates the Virtualizer folder in the HPSS folder. If you want to change the location of the Virtualizer folder, edit the following line in the Cache path for Reports module section, which is located at the end of the ServerParameters.Properties file:

# Cache path for Reports module
# ReportFileVirtualizerPath=.\\Virtualizer\\

IMPORTANT: Ensure that '\\' is retained when mentioning the path for the Virtualizer folder.

Log report details and exceptions

When a report is generated, manually or through a schedule, report details and exceptions are logged in the pa.log and Log4J API files, respectively. By default, only error conditions are logged. To set the level of tracing:

1. Stop the PA service from Start > Programs > HPE XP7 Performance Advisor > Stop services.
2. Navigate to the following location: <install directory>:\hpss\pa\tomcat\webapps\pa\WEB-INF\ and open the Log4J.properties file using a text editor.
3. Remove the comment by deleting the # for the line log4j.rootLogger=ALL, DEFAULT_CONSOLE, DEFAULT_LOG, DEFAULT_SOCKET under Debug Mode, and save the changes.
4. Restart the PA service from Start > Programs > HPE XP7 Performance Advisor > Restart services.

You can access the pa.log file from the following location: <install directory>:\hpss\pa\tomcat\log\

Launch PA from other Storage products

About launching PA from HPE XP7 Tiered Storage Manager

HPE XP7 Tiered Storage Manager is used to perform migration, where the data stored on a predefined set of volumes is moved to another set of volumes with the same characteristics, thus archiving the data and freeing up the current volumes for use by other applications. There can be situations where data residing on LDEVs in the XP disk arrays is not frequently accessed. Such data can be moved to a lower performing tier. The LDEVs that are experiencing high I/Os should be moved to a higher performing tier, or to a lower utilized RAID Group within the same tier, to balance performance.

To identify the set of LDEVs for data migration, use PA, where you can view the usage data for the LDEVs and the related RAID Groups in the form of charts. PA displays charts for the selected LDEVs, for the specified metric category, metric, and duration. You can analyze the charts to know the LDEVs and the related RAID Groups that have less frequently used data. The charts show the read/write data and the I/Os for the selected LDEVs and their associated RAID Groups. Further, based on the data projected, you can decide on the data that needs to be migrated to lesser used volumes. Use HPE XP7 Tiered Storage Manager to move data from the source volumes to other target volumes that satisfy the performance (Service Level Objectives) for that data. This is especially useful in cases where the performance required for the volumes changes with the passage of time.

IMPORTANT: Launch PA from HPE XP7 Tiered Storage Manager to view the usage data of the LDEVs and the related RAID Groups only for the XP, P9500, and XP7 disk arrays.

You can launch PA from supported versions of HPE XP7 Tiered Storage Manager. Once launched, the charts for the LDEVs are displayed in the PA GUI for all the XP disk arrays that PA supports. For more information, see the manual set provided for the HPE XP7 Tiered Storage Manager Software on the HPE Manuals page. You can launch PA for the Migration Group volumes and the Storage Tier volumes, and also in the Create Migration Task operation to facilitate the selection of source and target volumes.

IMPORTANT:

The location of the PA management station and other parameters are defined in the HPE XP7 Tiered Storage Manager hppa.properties file. For more information, see the HPE XP7 Tiered Storage Manager Software Administrator Guide.
If you have already logged in to PA using the specified management station address, the PA login screen is not displayed. Instead, the PA screen is displayed, where you can select the metric and the duration to view the graphs.

View performance graphs for LDEVs

Procedure

1. To access PA from HPE XP7 Tiered Storage Manager and view the charts for the LDEVs that belong to a storage domain: On HPE XP7 Command View Advanced Edition Suite, click the Mobility tab to

304 view the list of the logical groups created. The logical groups contain the list of volumes which are grouped logically to migrate. 2. From the list of logical groups, select the group for which you want view the performance graphs of the associated LDEVs. The logical groups and the volumes in the groups are displayed. 3. Click Logical Groups. 4. Click the logical group for which you want to analyze performance. All the LDEVs that belong to the selected migration group are displayed under the Volumes tab. 5. Under the Volume tab, select the LDEV records for which you want to view their usage and I/O details. 6. Click Analyze Performance. The PA login page is displayed. 7. Enter your user name, password, and click Login. By default, the LDEVs component screen displays, and you can plot the performance graphs for the selected LDEVs, monitor the associated components. NOTE: Once you login, the current session is valid for 24 hours. You can also use the other PA screens to perform tasks, such as generating reports and viewing events. For more information on the functionality and related procedures, see the individual chapters in this guide. 304 Launch PA from other Storage products

View performance graphs for RAID Groups

Procedure

1. To access PA from HPE XP7 Tiered Storage Manager and view the charts for the RAID Groups that belong to a storage domain: On HPE XP7 Command View Advanced Edition Suite, click the Mobility tab to view the list of the THP pools created for each array.
2. From the list of THP pools, select the THP pools for which you want to plot data for the parity groups from which each THP pool is created.
3. Click the Parity Groups tab.
4. Under the Parity Groups tab, select the RAID Group records for which you want to view usage and I/O details.
5. Click Analyze Performance. The PA login page is displayed.
6. Enter your user name and password, and click Login. By default, the RAID Groups component screen is displayed. From the master pane, select the RG records that you want to plot. Use the Actions menu in the detail pane to add more metrics in the Chart View.

NOTE: Once you log in, the current session is valid for 24 hours.

Support and other resources

Accessing Hewlett Packard Enterprise Support

For live assistance, go to the Contact Hewlett Packard Enterprise Worldwide website. To access documentation and support services, go to the Hewlett Packard Enterprise Support Center website.

Information to collect:
• Technical support registration number (if applicable)
• Product name, model or version, and serial number
• Operating system name and version
• Firmware version
• Error messages
• Product-specific reports and logs
• Add-on products or components
• Third-party products or components

Accessing updates

Some software products provide a mechanism for accessing software updates through the product interface. Review your product documentation to identify the recommended software update method. To download product updates, go to either of the following:
• Hewlett Packard Enterprise Support Center Get connected with updates page
• Software Depot website

To view and update your entitlements, and to link your contracts and warranties with your profile, go to the Hewlett Packard Enterprise Support Center More Information on Access to Support Materials page.

IMPORTANT: Access to some updates might require product entitlement when accessed through the Hewlett Packard Enterprise Support Center. You must have an HP Passport set up with relevant entitlements.

Websites
• Contact Hewlett Packard Enterprise Worldwide
• Subscription Service/Support Alerts
• Software Depot
• Customer Self Repair
• Hewlett Packard Enterprise Information Library
• RMC Documentation on Hewlett Packard Enterprise Information Library
• Hewlett Packard Enterprise Support Center
• Single Point of Connectivity Knowledge (SPOCK) Storage Compatibility Matrix
• Storage White Papers

Customer self repair

Hewlett Packard Enterprise customer self repair (CSR) programs allow you to repair your product. If a customer self-replaceable part must be replaced, it will be shipped directly to you so that you can install it at your convenience. Some parts do not qualify for CSR. Your Hewlett Packard Enterprise authorized service provider will determine whether a repair can be accomplished by CSR. For more information about CSR, contact your local service provider or go to the CSR website.

Remote support

Remote support is available with supported devices as part of your warranty or contractual support agreement. It provides intelligent event diagnosis and automatic, secure submission of hardware event notifications to Hewlett Packard Enterprise, which will initiate a fast and accurate resolution based on your product's service level. Hewlett Packard Enterprise strongly recommends that you register your device for remote support. If your product includes additional remote support details, use search to locate that information.

Remote support and Proactive Care information:
• HPE Get Connected
• HPE Proactive Care services
• HPE Proactive Care service: Supported products list
• HPE Proactive Care advanced service: Supported products list

Proactive Care customer information:
• Proactive Care central
• Proactive Care service activation

Document feedback

Hewlett Packard Enterprise is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback. When submitting your feedback, include the document title, part number, edition, and publication date located on the front cover of the document. For online help content, include the product name, product version, help edition, and publication date located on the legal notices page.

Logical partitions

Storage management logical partitions (SLPRs)

A disk array can be shared by multiple organizations, and by multiple departments within an enterprise. Therefore, multiple administrators might manage a single disk array. This circumstance creates the potential for an administrator to destroy volumes of other organizations, and it can complicate and increase the difficulty of managing the disk array. Use Disk/Cache Partition to allocate the components of one disk array (all ports and CLPRs) to virtual disk arrays called SLPRs. You can create up to 31 SLPRs in one disk array. Each virtual disk array can be accessed only by its administrator. This approach eliminates the risk of an administrator destroying volumes of other organizations, and the risk of data leaks among organizations.

In a non-partitioned environment, the full array is considered one single partition, SLPR0. After the disk array is partitioned, SLPR0 becomes the unpartitioned portion of the disk array. Similarly, CLPR0 contains all parity groups (PGs) and cache in the non-partitioned environment. After the disk array is partitioned, CLPR0 contains the remaining PGs and cache that are not allocated to other CLPRs.

Figure 10: Example of an SLPR

The figure above displays an example of one disk array partitioned into two virtual disk arrays, each allocated to one enterprise. Enterprise A's disk array administrator can manage enterprise A's virtual disk array, but cannot manage enterprise B's disk array. Similarly, enterprise B's disk array administrator can manage enterprise B's virtual disk array, but cannot manage enterprise A's disk array.

Cache logical partitions (CLPRs)

When one disk array is shared by multiple hosts, and one host reads or writes a large amount of data, that host's read and write data can occupy a large area of the disk array's cache memory. In this situation, the I/O performance of the other hosts decreases because those hosts must wait to write to cache memory. To prevent this situation, CLPR partitions the disk array's cache memory. The partitioned cache memories are used as virtual cache memories, one allocated to each host. This approach minimizes the effect that one administrator's operations have on the volumes of other administrators.

Figure 11: Example of a CLPR

The figure above displays how a corporation's cache memory is partitioned into three virtual cache memories. Although the Branch A host is inputting and outputting a large amount of data, the Branch B and Branch C hosts are unaffected because each branch is allocated a 40 GB CLPR.

Sample reports

PA supports report generation for the following categories:
• Array performance report on page 311
• LDEV IO report on page 317
• RAID Group Utilization Report on page 321
• Cache utilization report on page 322
• ACP utilization report on page 326
• CHIP utilization report on page 328
• ThP Pool Occupancy report on page 330
• Snapshot Pool Occupancy report on page 330
• Continuous Access Journal Group utilization report on page 330
• LDEV Activity report on page 331
• Export Database report on page 333
• All report on page 333
• MP blade utilization report on page 334

You must install Acrobat Reader to view reports in the PDF format. The reports can be generated in the HTML, RTF, PDF, and CSV formats. The sample reports are given below.

Array performance report

The Array Performance report provides the overall performance of an XP or an XP7 disk array by measuring the total I/Os and the read and write I/Os on that array. The Array Performance report comprises the following reports:
• Total I/O Rate
• Total I/O Rate by hour of day
• Total I/O Rate Detail
• Read-Write Ratio
• Read-Write Ratio by hour of day
• Read-Write Detail
• Max/Min Frontend Port IOPS
• Max/Min Frontend Port MB/s

In addition, it includes a section called Findings at the beginning of the report.

IMPORTANT: The Findings section for an XP disk array provides a brief summary of the status of the CHIPs, cache, ACPs, and the LDEVs. The Findings section for a P9500/XP7 disk array provides a brief summary of the status of the cache, LDEVs, and the MP blades. The utilization summary of the CHIP/CHA and the ACP/DKA MPs is not displayed in the Array Performance report Findings section for the P9500/XP7 disk arrays.

NOTE: A backend transfer is a block of data that is transferred between the XP/XP7 disk array cache and the RAID Groups. Every read cache miss results in a backend transfer.

A sample of each report is given below:

Total I/O Rate report

The Total I/O Rate report displays, in a chart format, the number of total read and write I/O operations over the entire period. Figure 12: Total I/O Rate on page 312 displays a sample Total I/O Rate report for the P9500 Disk Array.

Figure 12: Total I/O Rate

The total backend transfers may be compared to the total frontend I/Os; the difference is due to the effects of the array cache. The total backend transfer load is taken by the RAID Groups and ACP/DKA pairs, whereas the total frontend I/O load is taken by the CHIP/CHA ports.

NOTE: If there are no data points available for the dates selected, a blank chart is displayed. If all the data values are zero for the dates selected, a chart with a horizontal line along the X axis is displayed in the center of the chart.

Total I/O Rate by hour of day report

The Total I/O Rate by hour of day report displays, in a chart format, the number of total read and write I/O operations per second over a 24-hour period. Figure 13: Total I/O Rate by hour of day on page 313 displays a sample Total I/O Rate by hour of day report for a P9500 Disk Array.

Figure 13: Total I/O Rate by hour of day

The total backend transfers may be compared to the total frontend I/Os; the difference is due to the effects of the array cache. The total backend transfer load is taken by the RAID Groups and ACP/DKA pairs, whereas the total frontend I/O load is taken by the CHIP/CHA ports.

NOTE: If there are no data points available for the dates selected, a blank chart is displayed. If all the data values are zero for the dates selected, a chart with a horizontal line along the X axis is displayed in the center of the chart.

IMPORTANT: For the Hour of the Day report, all the points collected aggregate to the start of the hour. For example, if data is collected between 1 p.m. and 2 p.m., the aggregate data is displayed at 1 p.m. instead of 2 p.m.

Total I/O Rate Detail report

The Total I/O Rate Detail report displays, in a chart format, the number of Sequential I/O, Random I/O, and CFW I/O operations over the entire period. Figure 14: Total I/O Rate Detail on page 314 displays a sample Total I/O Rate Detail report for a P9500 Disk Array.

Figure 14: Total I/O Rate Detail

Sequential frontend I/Os occur when data is read from or written to consecutive addresses. Random frontend I/Os occur when applications address non-consecutive blocks of data. CFWs are a special class of I/Os generated by HPE's XP7 Continuous Access Remote Mirroring software.

NOTE: If there are no data points available for the dates selected, a blank chart is displayed. If all the data values are zero for the dates selected, a chart with a horizontal line along the X axis is displayed in the center of the chart.

Read/Write Ratio report

The Read/Write Ratio report displays, in a chart format, the ratio of read activity to write activity over the entire period. It covers both sequential and random read or write activity. Figure 15: Read/Write Ratio on page 314 displays a sample Read/Write Ratio report for a P9500 Disk Array.

Figure 15: Read/Write Ratio

For example, a data point of X on the graph indicates X% read activity and (100-X)% write activity.

NOTE: If there are no data points available for the dates selected, a blank chart is displayed. If all the data values are zero for the dates selected, a chart with a horizontal line along the X axis is displayed in the center of the chart.

Read/Write Ratio by hour of day report

The Read/Write Ratio by hour of day report displays, in a chart format, the ratio of read activity to write activity over a 24-hour period. It covers both sequential and random read or write activity. Figure 16: Read/Write Ratio by hour of day on page 315 displays a sample Read/Write Ratio by hour of day report for a P9500 Disk Array.

Figure 16: Read/Write Ratio by hour of day

NOTE: If there are no data points available for the dates selected, a blank chart is displayed. If all the data values are zero for the dates selected, a chart with a horizontal line along the X axis is displayed in the center of the chart.

Read/Write Detail report

The Read/Write Detail report displays, in a chart format, the total I/Os separated into different I/O types: sequential reads and writes and random reads and writes, displayed as the number of I/O operations per second. The graph provides more detail than the previous graphs about the types of I/Os occurring on an XP disk array. Figure 17: Read/Write Detail on page 316 displays a sample Read/Write Detail report for a P9500 Disk Array.

Figure 17: Read/Write Detail

NOTE: If there are no data points available for the dates selected, a blank chart is displayed. If all the data values are zero for the dates selected, a chart with a horizontal line along the X axis is displayed in the center of the chart.

Max/Min Frontend Port IOPS report

The Max/Min Frontend Port IOPS report displays, in a chart format, the total maximum and minimum frontend port I/O operations per second over the entire data collection period. The figure below displays a sample Max/Min Frontend Port IOPS report for a P9500 Disk Array.

Figure 18: Max/Min Frontend Port IOPS

NOTE: If there are no data points available for the dates selected, a blank chart is displayed. If all the data values are zero for the dates selected, a chart with a horizontal line along the X axis is displayed in the center of the chart.

Max/Min Frontend Port MB/s report

The Max/Min Frontend Port MB/s report displays, in a chart format, the total maximum and minimum frontend port MB/s over the entire data collection period. The figure below displays a sample Max/Min Frontend Port MB/s report for a P9500 Disk Array.

Figure 19: Max/Min Frontend Port MB/s

NOTE: If there are no data points available for the dates selected, a blank chart is displayed. If all the data values are zero for the dates selected, a chart with a horizontal line along the X axis is displayed in the center of the chart.

LDEV IO report

The LDEV IO report provides data on the busiest frontend and backend LDEVs and RAID Groups on an XP or an XP7 disk array. It is based on the frontend I/Os and the backend transfers. You can view the report for the busiest frontend and backend LDEVs, and for the 8-32 busiest frontend and backend RAID Groups. The selection is in multiples of eight. If you do not select any value from the respective drop-down lists, by default, the LDEV IO report is generated for the eight busiest frontend and eight busiest backend LDEVs, and the eight busiest frontend and eight busiest backend RAID Groups. Further, the report displays graphs for only those LDEVs that have associated I/Os and those RAID Groups on which I/O transactions have occurred.

Consider the following example: A report is created to view the 32 busiest frontend LDEVs and the 16 busiest frontend RAID Groups, but only eight of the selected 32 LDEVs and four of the selected 16 RAID Groups are busy. PA generates the LDEV IO report where you can view graphs for only the eight LDEVs and four RAID Groups on which the maximum I/O transactions have occurred. Graphs are not shown for the remaining LDEVs or RAID Groups.

The LDEV IO report also provides a link to the additional LDEV IO mapping information. The busiest LDEVs are displayed at different ranks in a tabular format.

In the LDEV I/O Mapping table:
• A hyphen (-) is displayed in the RAID Format column if the RAID format is not applicable, as for THP Pool V-Vols.
• A hyphen (-) is displayed in the LUSE Master column if the LDEV record is not a LUSE Master. In that case, the LDEV is either a LUSE component or an individual volume (not part of any LUSE).
• A hyphen (-) is displayed in the LUSE Status column if the LDEV record is neither a LUSE master nor a LUSE component. The LUSE Status is not applicable for such LDEV records.

A sample of each report is given below:

Total Backend I/O Rate First Top 8 LDEVs report

The Total Backend I/O Rate First Top 8 LDEVs report displays, in a chart format, the real backend I/O rate of the busiest eight LDEVs. This can be compared to the potential maximum throughput of the hardware. The maximum throughput varies depending on the RAID level, disk mechanism type, and other factors such as the size of the individual I/Os. Figure 20: Total Backend I/O Rate First Top 8 LDEVs on page 319 displays a sample Total Backend I/O Rate First Top 8 LDEVs report for the XP1024 Disk Array.

Figure 20: Total Backend I/O Rate First Top 8 LDEVs

Total Backend I/O Rate First Top 8 RAID Groups report

The Total Backend I/O Rate First Top 8 RAID Groups report displays, in a chart format, the real backend I/O rate for the busiest eight RAID Groups. This can be compared to the potential maximum throughput of the hardware. The maximum throughput varies depending on the RAID level, disk mechanism type, and other factors such as the size of the individual I/Os. Figure 21: Total Backend I/O Rate First Top 8 RAID Groups on page 319 displays a sample Total Backend I/O Rate First Top 8 RAID Groups report for the XP1024 Disk Array.

Figure 21: Total Backend I/O Rate First Top 8 RAID Groups

Total Frontend I/O Rate First Top 8 LDEVs report

The Total Frontend I/O Rate First Top 8 LDEVs report displays, in a chart format, the number of I/O operations performed by the first set of the busiest eight LDEVs. Figure 22: Total Frontend I/O Rate First Top 8 LDEVs on page 320 displays a sample Total Frontend I/O Rate First Top 8 LDEVs report for the XP1024 Disk Array.

Figure 22: Total Frontend I/O Rate First Top 8 LDEVs

Total Frontend I/O Rate First Top 8 RAID Groups/Pools report

The Total Frontend I/O Rate First Top 8 RAID Groups/Pools report displays, in a chart format, the number of I/O operations performed by the eight busiest RAID Groups or pools. Pools can be either ThP pools or snapshot pools. Figure 23: Total Frontend I/O Rate First Top 8 Array Groups/Pools on page 320 displays a sample Total Frontend I/O Rate First Top 8 RAID Groups/Pools report for the XP1024 Disk Array.

Figure 23: Total Frontend I/O Rate First Top 8 Array Groups/Pools

RAID Group Utilization Report

The RAID Group Utilization report consists of four charts that display the utilization of the top 32 RAID Groups, split into eight each. The RAID Group utilization indicates the total utilization of a RAID Group over an entire collection interval. Figure 24: RAID Group Utilization First top 8 RAID Groups on page 321 displays a sample RAID Group Utilization report that provides the first top eight RAID Groups for a P9500 Disk Array.

Figure 24: RAID Group Utilization First top 8 RAID Groups

The report displays the utilization graphs for only those RAID Groups that have managed backend transfers. When a RAID Group is associated with a ThP pool, the extent of RAID Group utilization due to I/Os occurring on the ThP pool is considered.

Cache utilization report

The cache utilization reports allow you to view, in a chart format:
• the utilization of cache in the XP/XP7 disk array
• the amount of data in the cache that is waiting to be written to disk
• read hits as a percentage of total read operations
• the total number of transfers per second
• the total number of transfers over a 24-hour period
• cache side file utilization for Continuous Access Asynchronous activity
• the cache partition write pending rate for each MP blade
• the cache partition utilization for each MP blade

A sample of each report is given below:

Cache Utilization report

The Cache Utilization report displays, in a chart format, the cache utilization in an XP or an XP7 disk array. Figure 25: Cache Utilization on page 322 displays a sample Cache Utilization report for a P9500 Disk Array.

Figure 25: Cache Utilization

Cache Write Pending report

The Cache Write Pending report displays, in a chart format, the amount of data in the cache waiting to be written to disk. It helps determine the amount of cache available. Figure 26: Cache Write Pending on page 323 displays a sample Cache Write Pending report for a P9500 Disk Array.

Figure 26: Cache Write Pending

Percentage Read Hits report

The Percentage Read Hits report displays, in a chart format, cache read hits as a percentage of the total cache read operations. Figure 27: Percentage read hits on page 323 displays a sample Percentage Read Hits report for a P9500 Disk Array.

Figure 27: Percentage read hits

Total Backend Transfer report

The Total Backend Transfer report displays, in a chart format, the total number of transfers per second: sequential and random drive-to-cache, and cache-to-drive.

Figure 28: Total Backend Transfer report on page 324 displays a sample Total Backend Transfer report for a P9500 Disk Array.

Figure 28: Total Backend Transfer report

Total Backend Transfer by Hour of the Day report

The Total Backend Transfer by Hour of the Day report displays, in a chart format, the total number of transfers, both sequential and random drive-to-cache transfers and all cache-to-drive transfers, averaged over a 24-hour period. Figure 29: Total Backend Transfer by Hour of the Day on page 324 displays a sample Total Backend Transfer by Hour of the Day report for a P9500 Disk Array.

Figure 29: Total Backend Transfer by Hour of the Day

Cache Side File Utilization report

The Cache Side File Utilization report displays, in a chart format, the cache side file utilization. The cache side file is used by the Continuous Access Async software; it holds the data buffers that have not yet been acknowledged by the remote host.

Figure 30: Cache Side File Utilization on page 325 displays a sample Cache Side File Utilization report for a P9500 Disk Array.

Figure 30: Cache Side File Utilization

Figure 31: CLPR MP Blade Write Pending Rate

Figure 32: CLPR MP Blade Usage Rate

ACP utilization report

IMPORTANT: The utilization metrics on the ACP/DKA MPs are not displayed for the P9500/XP7 disk arrays. They are included as part of the utilization metrics displayed for the MP Blades in the P9500/XP7 disk arrays.

The ACP utilization reports allow you to view, in a chart format, the average utilization of the various installed ACP/DKA pairs, either over the entire period or over every hour of a day. A sample of each report is given below:

ACP Utilization report

The ACP Utilization report displays, in a chart format, the average utilization of the installed ACP/DKA pairs over the entire period. Figure 33: ACP utilization over the entire period on page 327 displays a sample ACP Utilization report for an XP24000 Disk Array.

Figure 33: ACP utilization over the entire period

ACP Utilization by Hour of the Day report

The ACP Utilization by Hour of the Day report displays, in a chart format, the average utilization of the installed ACP/DKA pairs over a 24-hour period. Figure 34: ACP utilization over a 24-hour period on page 327 displays a sample ACP Utilization by Hour of the Day report for an XP24000 Disk Array.

Figure 34: ACP utilization over a 24-hour period

CHIP utilization report

IMPORTANT: The utilization metrics on the CHIP/CHA MPs are not displayed for the P9500/XP7 disk arrays. They are included as part of the utilization metrics displayed for the MP blades in the P9500/XP7 disk arrays.

The CHIP utilization reports allow you to view, in a chart format, the utilization data for all the installed CHIPs/CHAs in the array, and the average utilization data for all the installed CHIPs/CHAs in an XP disk array. While generating a CHIP utilization report, you can select one or more CHIPs from the Available CHIPs list. If you select All and set a long duration, HPE XP7 Performance Advisor takes more time to generate the reports. HPE recommends that you select only a few CHIPs when the duration is long. A sample of each report is given below:

CHIP Utilization report

The CHIP Utilization report displays, in a chart format, the utilization data for all the installed CHIPs/CHAs in an XP disk array. Figure 35: CHIP Utilization on page 328 displays a sample CHIP Utilization report for an XP24000 Disk Array.

Figure 35: CHIP Utilization

CHIP Utilization by Hour of the Day report

The CHIP Utilization by Hour of the Day report displays, in a chart format, the utilization data for all the installed CHIPs/CHAs in the array, averaged over a 24-hour period. The figure below displays a sample CHIP Utilization by Hour of the Day report for an XP24000 Disk Array.

Figure 36: CHIP Utilization by Hour of the Day

CHIP Processor Utilization report

The CHIP Processor Utilization report displays, in a chart format, the individual MP utilization on an installed CHIP/CHA. Figure 37: CHIP Processor Utilization on page 329 displays a sample CHIP Processor Utilization report for an XP24000 Disk Array.

Figure 37: CHIP Processor Utilization

In this sample report, the individual MP utilization for the CHA 1E is displayed. Similarly, a report is generated for each of the installed CHIPs/CHAs.

ThP Pool Occupancy report

The ThP Pool Occupancy report provides the usage percentage of the eight busiest ThP pools. The following types of charts are available when you generate the ThP Pool Occupancy report for XP, XP7, or P9500 Disk Arrays:
• Total THP Pool Utilization, First Top 8 THP pools, shows the pool occupancy of the busiest eight THP pools.
• Total Front-end I/O Rate, First Top 8 Pools, shows the number of I/O operations performed by the busiest eight THP pools.
• Total Front-end MB Rate, First Top 8 Pools, shows the number of MB operations performed by the busiest eight THP pools.
• Total Back-end Track I/O Rate, First Top 8 Pools, shows the real back-end I/O rate sustained by the busiest eight THP pools.
• Max read response time, first top 8 pools, shows the maximum read response times for the busiest eight pools of the XP disk array. The maximum read response time depends on factors such as, but not limited to, the RAID level and disk mechanism type configured in the pool, the pattern of the I/Os (sequential or random), the size of the individual I/Os, and the cache configuration.
• Max write response time, first top 8 pools, shows the maximum write response times for the busiest eight pools of the XP disk array. The maximum write response time depends on the same factors.
• Average read response time, first top 8 pools, shows the average read response times for the busiest eight pools of the XP disk array. The average read response time depends on the same factors.
• Average write response time, first top 8 pools, shows the average write response times for the busiest eight pools of the XP disk array. The average write response time depends on the same factors.

Snapshot Pool Occupancy report

The Snapshot Pool Occupancy report provides the usage percentage of the eight busiest snapshot pools.

NOTE: PA reports only those snapshot volumes in an array that are assigned to a pool.

Continuous Access Journal Group utilization report

The Journal Pool Utilization report displays the utilization percentage of the eight busiest Journal groups. Figure 38: Continuous Access Journal group utilization on page 331 displays a sample Continuous Access Journal Group Utilization report for a P9500 Disk Array.

Figure 38: Continuous Access Journal group utilization

LDEV Activity report

You can view the busiest and least busy LDEVs in an XP or an XP7 disk array through the LDEV Activity report. The LDEV data can be for one of the following metric types:
• FrontEndIO
• BackEndIO
• MB
• Utilization
• Read Response Time
• Write Response Time

The busiest and least busy LDEVs are collated based on the maximum and minimum threshold levels that you specify, and also on the metric type that you select. For the metric type and duration that you specify, the average of the total performance of each LDEV is considered. This average value is then compared with the set threshold levels to determine whether that particular LDEV's performance is above or below the threshold limit. Based on their average values, the LDEVs are grouped into the top 100 busiest or the least 100 busiest LDEVs and displayed in the CSV file. This implies that only those LDEVs that are above the maximum or below the minimum set threshold limits are considered.

Figure 39: LDEV Activity report

IMPORTANT: The threshold limits that you specify are independent of each other and applicable only to the category that you select. You can set both the maximum and minimum threshold levels, or only one of them, based on your requirement.

The report also provides the associated drive types for the LDEVs. This information helps you identify whether the associated drive supports the required LDEV performance. If not, move the LDEV to a different drive type.

Export Database report

The Export Database report provides a .csv file as the output. You can use the .csv file to export data to a data visualization program, such as Microsoft Excel. The data can be used for charting or graphing, and can also include the Ext-Lun information.

Figure 40: Export Database report (Human readable format)

For more information on the different .csv files that are generated for an XP or XP7 disk array, see Export DB CSV files on page 274.

All report

Based on whether you generate the All report for an XP disk array or an XP7 disk array, the All report consolidates data and provides a single report for the following reports in the selected date and time range:

XP Disk Array:
• Array Performance
• LDEV IO
• RAID Group Utilization
• Cache Utilization
• ACP Utilization
• CHIP Utilization
• Journal Pool Utilization
• ThP Pool Occupancy
• Snapshot Pool Occupancy

P9500/XP7 disk array:
• Array Performance
• LDEV IO
• RAID Group Utilization
• Cache Utilization
• MP Blade Utilization
• Journal Pool Utilization
• ThP Pool Occupancy
• Snapshot Pool Occupancy

IMPORTANT: The All report type for an XP or an XP7 disk array includes reports on the journal pool utilization, ThP pool occupancy, and the snapshot pool occupancy only if they are configured in the selected XP or XP7 disk array. Individual CHIP MP utilization charts have been removed from the All report.

MP blade utilization report

The MP Blade Utilization report can be generated only for the P9500/XP7 disk arrays. It includes the average utilization data for each individual MP blade, their top 20 consumers, and the associated processing types.

Average utilization of an MP Blade

The average utilization is calculated as the average across the utilization of all the individual processors in the MP Blade.

MP Blade Utilization by top resources

The average utilization of an MP Blade by the top 20 consumers is displayed in a chart for the selected duration. The top 20 consumers can be LDEVs, continuous access journal groups, or E-LUNs (external volumes). For more information, see View MP Blade utilization summary for XP7 disk arrays.

MP Blade Utilization by the processing types

The average MP Blade Utilization Splitup for the different processing types is displayed in a chart for the selected duration. The duration for which the MP blade was busy processing consumer requests is also displayed as the Total Busy Time.


More information

HP integrated Citrix XenServer Online Help

HP integrated Citrix XenServer Online Help HP integrated Citrix XenServer Online Help Part Number 486855-002 September 2008 (Second Edition) Copyright 2008 Hewlett-Packard Development Company, L.P. The information contained herein is subject to

More information

HP Automation Insight

HP Automation Insight HP Automation Insight For the Red Hat Enterprise Linux and SUSE Enterprise Linux operating systems AI SA Compliance User Guide Document Release Date: July 2014 Software Release Date: July 2014 Legal Notices

More information

HPE 3PAR OS MU3 Patch 28 Release Notes

HPE 3PAR OS MU3 Patch 28 Release Notes HPE 3PAR OS 3.2.1 MU3 Patch 28 Release tes This release notes document is for Patch 28 and intended for HPE 3PAR Operating System Software 3.2.1.292 (MU3)+Patch 23. Part Number: QL226-99107 Published:

More information

HYCU SCOM Management Pack for Nutanix

HYCU SCOM Management Pack for Nutanix HYCU SCOM Management Pack for Nutanix Product version: 2.5 Product release date: May 2018 Document edition: First Legal notices Copyright notice 2016-2018 HYCU. All rights reserved. This document contains

More information

VMware vrealize Operations for Horizon Administration

VMware vrealize Operations for Horizon Administration VMware vrealize Operations for Horizon Administration vrealize Operations for Horizon 6.2 This document supports the version of each product listed and supports all subsequent versions until the document

More information

NETWORK PRINT MONITOR User Guide

NETWORK PRINT MONITOR User Guide NETWORK PRINT MONITOR User Guide Legal Notes Unauthorized reproduction of all or part of this guide is prohibited. The information in this guide is subject to change for improvement without notice. We

More information

Online Help StruxureWare Data Center Expert

Online Help StruxureWare Data Center Expert Online Help StruxureWare Data Center Expert Version 7.2.7 What's New in StruxureWare Data Center Expert 7.2.x Learn more about the new features available in the StruxureWare Data Center Expert 7.2.x release.

More information

HPE Aruba Airwave Installation and Startup Service

HPE Aruba Airwave Installation and Startup Service Data sheet HPE Aruba Airwave Installation and Startup Service Support Services HPE Installation and Startup Service for select Aruba Airwave products coordinates installation, configuration, and verification

More information

HP OneView for VMware vcenter User Guide

HP OneView for VMware vcenter User Guide HP OneView for VMware vcenter User Guide Abstract This document contains detailed instructions for configuring and using HP OneView for VMware vcenter (formerly HP Insight Control for VMware vcenter Server).

More information

HP IDOL Site Admin. Software Version: Installation Guide

HP IDOL Site Admin. Software Version: Installation Guide HP IDOL Site Admin Software Version: 10.9 Installation Guide Document Release Date: March 2015 Software Release Date: March 2015 Legal Notices Warranty The only warranties for HP products and services

More information

HP StorageWorks. EVA Virtualization Adapter administrator guide

HP StorageWorks. EVA Virtualization Adapter administrator guide HP StorageWorks EVA Virtualization Adapter administrator guide Part number: 5697-0177 Third edition: September 2009 Legal and notice information Copyright 2008-2009 Hewlett-Packard Development Company,

More information

HP LeftHand SAN Solutions

HP LeftHand SAN Solutions HP LeftHand SAN Solutions Support Document Installation Manuals VSA 8.0 Quick Start - Demo Version Legal Notices Warranty The only warranties for HP products and services are set forth in the express warranty

More information

vcenter Operations Manager for Horizon View Administration

vcenter Operations Manager for Horizon View Administration vcenter Operations Manager for Horizon View Administration vcenter Operations Manager for Horizon View 1.5 vcenter Operations Manager for Horizon View 1.5.1 This document supports the version of each product

More information

Veeam ONE. Version 8.0. User Guide for VMware vsphere Environments

Veeam ONE. Version 8.0. User Guide for VMware vsphere Environments Veeam ONE Version 8.0 User Guide for VMware vsphere Environments July, 2015 2015 Veeam Software. All rights reserved. All trademarks are the property of their respective owners. No part of this publication

More information

HP P4000 Remote Copy User Guide

HP P4000 Remote Copy User Guide HP P4000 Remote Copy User Guide Abstract This guide provides information about configuring and using asynchronous replication of storage volumes and snapshots across geographic distances. For the latest

More information

Tuning Manager Software

Tuning Manager Software Hitachi Command Suite Tuning Manager Software Getting Started Guide FASTFIND LINKS Document Organization Product Version Getting Help Contents MK-96HC120-08 Copyright 2010 Hitachi Ltd., Hitachi Data Systems

More information

HP Matrix Operating Environment 7.1 Getting Started Guide

HP Matrix Operating Environment 7.1 Getting Started Guide HP Matrix Operating Environment 7.1 Getting Started Guide Abstract This document provides an overview of the HP Matrix Operating Environment. It is intended to be used by system administrators and other

More information

BIG-IP Analytics: Implementations. Version 13.1

BIG-IP Analytics: Implementations. Version 13.1 BIG-IP Analytics: Implementations Version 13.1 Table of Contents Table of Contents Setting Up Application Statistics Collection...5 What is Analytics?...5 About HTTP Analytics profiles... 5 Overview:

More information

Isilon InsightIQ. Version User Guide

Isilon InsightIQ. Version User Guide Isilon InsightIQ Version 4.1.1 User Guide Copyright 2009-2017 Dell Inc. or its subsidiaries. All rights reserved. Published January 2017 Dell believes the information in this publication is accurate as

More information

HP Virtual Connect Enterprise Manager

HP Virtual Connect Enterprise Manager HP Virtual Connect Enterprise Manager Data Migration Guide HP Part Number: 487488-001 Published: April 2008, first edition Copyright 2008 Hewlett-Packard Development Company, L.P. Legal Notices Confidential

More information

LifeSize Control Installation Guide

LifeSize Control Installation Guide LifeSize Control Installation Guide January 2009 Copyright Notice 2005-2009 LifeSize Communications Inc, and its licensors. All rights reserved. LifeSize Communications has made every effort to ensure

More information

Operations Manager Guide

Operations Manager Guide Operations Manager Guide Version: 10.10 10.10, December 2017 Copyright 2017 by MicroStrategy Incorporated. All rights reserved. Trademark Information The following are either trademarks or registered trademarks

More information

Virtual Recovery Assistant user s guide

Virtual Recovery Assistant user s guide Virtual Recovery Assistant user s guide Part number: T2558-96323 Second edition: March 2009 Copyright 2009 Hewlett-Packard Development Company, L.P. Hewlett-Packard Company makes no warranty of any kind

More information

HPE 3PAR OS MU5 Patch 49 Release Notes

HPE 3PAR OS MU5 Patch 49 Release Notes HPE 3PAR OS 3.2.1 MU5 Patch 49 Release Notes This release notes document is for Patch 49 and intended for HPE 3PAR Operating System Software + P39. Part Number: QL226-99362a Published: October 2016 Edition:

More information

Configuring RAID with HP Z Turbo Drives

Configuring RAID with HP Z Turbo Drives Technical white paper Configuring RAID with HP Z Turbo Drives HP Workstations This document describes how to set up RAID on your HP Z Workstation, and the advantages of using a RAID configuration with

More information