SAP NetWeaver BW Performance on IBM i: Comparing SAP BW Aggregates, IBM i DB2 MQTs and SAP BW Accelerator


By Susan Bestgen, IBM i OS Development, SAP on i

Introduction

The purpose of this paper is to demonstrate the capabilities of three solutions for optimizing SAP NetWeaver BW performance on IBM i: SAP BW aggregates, DB2 for IBM i Materialized Query Tables (MQTs), and the SAP BW Accelerator (BWA). Using a workload similar to the SAP MXL benchmark, all three solutions were implemented, tested, and their performance results compared. First, an overview of the hardware landscape is provided. The workload is then defined, along with brief descriptions of the three performance-enhancing implementations. The results section reports the performance of each solution in both a query-only test and a query + delta data load test. When determining the best solution for a given installation, these results can help identify trade-offs among performance, implementation effort, and cost.

Landscape Definition

The central system in the hardware landscape was an IBM Power 6 Model 595 running IBM i V7.1 with DB2 for i as the integrated database. SAP NetWeaver 7.0 was installed in a central instance configuration and the workload data was loaded. The initial goal was to hold memory constant across all test environments at 64 GB, but additional memory was needed to achieve the desired performance for some of the tests. To simulate concurrent users, a workload driver system (a single partition on an IBM Power 5 Model 595) running Linux was loaded with the SAP benchmark driver toolkit. The BW Accelerator was powered by four IBM HS22 blades using IBM's General Parallel File System (GPFS) and was attached to the central configuration with a dedicated gigabit Ethernet line. Figure 1 illustrates the landscape definition. All three test environments were configured and active at the same time; to test independently, each could be switched on or off for query use via tools shipped with the respective product. http://w3.ibm.com/support/techdocs Page 2 of 15

Figure 1. Landscape Definition:
- BW Accelerator: 4 HS22 blades, each with 2 x 2.93 GHz processors; 48 GB memory per blade (192 GB cluster total); 713 GB shared storage; 64-bit Linux 2.6.16.60; IBM GPFS; SAP NetWeaver BW 7.0; dedicated GB Ethernet connection
- Central configuration: IBM POWER 6 Model 595, 5.0 GHz; IBM i V7.1 with DB2 for i; SAP NetWeaver BW 7.0; 4-node partition; 64 GB / 96 GB main memory; 300 million rows across 10 InfoCubes; 48 drives / 3 TB
- Workload driver: IBM POWER 5 Model 595 running the SAP benchmark driver

Workload Definition

The key workload metric was SAP query navigation steps per hour (qns/hr), the throughput value of the workload. A query is a combination of characteristics and key figures (InfoObjects) for the analysis of the data of an InfoProvider; the query data can be displayed in different views, and a view change through a user interaction is considered a navigation step. The workload data configuration consisted of a data warehouse MultiProvider comprising ten InfoCubes. Each cube's fact table contains one year's worth of data, which equals 30,000,000 rows. Two test points were measured:

1. Query only: simulated end users concurrently running a series of 11 queries of varying complexity.
2. Query + data load: the same query load with all simulated end users active for at least one hour. In addition, three data loads, scheduled at the beginning, 20 minutes, and 40 minutes into the high-use interval (the time all workload users are signed on and actively running the query suite), extended the static operational data with delta data. Each delta load added 10,000 rows per cube.

SAP's benchmark workload toolkit was used to drive the workload from a Linux partition on the workload driver system. The goal was to maximize qns/hr while driving the CPU of the central install to 90% or greater. When running the delta loads, the goal was to complete each delta load in less than 20 minutes. A sample query from the workload follows:

SELECT "D3"."/B49/S_DIVISION" AS "S 022",
       "DU"."/B49/S_BASE_UOM" AS "S 020",
       "DU"."/B49/S_STAT_CURR" AS "S 031",
       SUM( "F"."/B49/S_CRMEM_CST" ) AS "Z 043",
       SUM( "F"."/B49/S_CRMEM_QTY" ) AS "Z 044",
       SUM( "F"."/B49/S_CRMEM_VAL" ) AS "Z 045",
       SUM( "F"."/B49/S_INCORDCST" ) AS "Z 046",
       SUM( "F"."/B49/S_INCORDQTY" ) AS "Z 047",
       SUM( "F"."/B49/S_INCORDVAL" ) AS "Z 048",
       SUM( "F"."/B49/S_INVCD_CST" ) AS "Z 049",
       SUM( "F"."/B49/S_INVCD_QTY" ) AS "Z 050",
       SUM( "F"."/B49/S_INVCD_VAL" ) AS "Z 051",
       SUM( "F"."/B49/S_OPORDQTYB" ) AS "Z 052",
       SUM( "F"."/B49/S_OPORDVALS" ) AS "Z 053",
       SUM( "F"."/B49/S_ORD_ITEMS" ) AS "Z 054",
       SUM( "F"."/B49/S_RTNSCST" ) AS "Z 055",
       SUM( "F"."/B49/S_RTNSQTY" ) AS "Z 056",
       SUM( "F"."/B49/S_RTNSVAL" ) AS "Z 057",
       SUM( "F"."/B49/S_RTNS_ITEM" ) AS "Z 058",
       COUNT( * ) AS "Z 059"
FROM "/B49/EBENCH05" "F"
JOIN "/B49/DBENCH05U" "DU" ON "F"."KEY_BENCH05U" = "DU"."DIMID"
JOIN "/B49/DBENCH051" "D1" ON "F"."KEY_BENCH051" = "D1"."DIMID"
JOIN "/B49/XCUSTOMER" "X2" ON "D1"."/B49/S_SOLD_TO" = "X2"."SID"
JOIN "/B49/DBENCH05T" "DT" ON "F"."KEY_BENCH05T" = "DT"."DIMID"
JOIN "/B49/DBENCH05P" "DP" ON "F"."KEY_BENCH05P" = "DP"."DIMID"
JOIN "/B49/DBENCH053" "D3" ON "F"."KEY_BENCH053" = "D3"."DIMID"
JOIN "/B49/SSALESORG" "S1" ON "D3"."/B49/S_SALESORG" = "S1"."SID"
WHERE "S1"."/B49/S_SALESORG" = 'B310'
  AND "X2"."/B49/S_COUNTRY" = 13
  AND "DT"."SID_0CALMONTH" BETWEEN 200501 AND 200512
  AND "DP"."SID_0CHNGID" = 0
  AND "DT"."SID_0FISCVARNT" = 5
  AND "DP"."SID_0RECORDTP" = 0
  AND "DP"."SID_0REQUID" <= 2267
  AND "X2"."OBJVERS" = 'A'
GROUP BY "D3"."/B49/S_DIVISION", "DU"."/B49/S_BASE_UOM", "DU"."/B49/S_STAT_CURR"
OPTIMIZE FOR ALL ROWS ;

All queries in the workload were similar to the sample, varying in the number of files joined and the complexity of the selection criteria. This mimics an ad hoc drill-down situation where a user starts by gathering data warehouse information and then narrows in on data specifics.

Test Environments

Performance data was collected running the workload in four environments: baseline, SAP aggregates, DB2 for i Materialized Query Tables, and Business Warehouse Accelerator. The hardware configuration remained the same for all environments except where explicitly noted. The only additional optimizations performed were indexes created to support each scenario (where necessary) and modifications to the delta load process chains unique to each test environment.

Baseline

The baseline test consisted of a basic install of the benchmark environment and benchmark driver toolkit. Sample runs were performed, which enabled the DB2 for i query optimizer to suggest indexes to improve query performance on the complex joins. These additional indexes over the InfoCube fact tables were created, and the baseline run was collected to establish query-only performance in a non-performance-optimized setting. This data point helped quantify the improvements shown in the query phase of the subsequent test scenarios.

SAP Aggregates

An SAP aggregate is a materialized, aggregated view of the data in an InfoCube. With an aggregate, the dataset of an InfoCube is saved redundantly and persistently in consolidated form in the database. Aggregates make it possible to improve query performance in a way similar to database indexes or database summary tables. SAP NetWeaver automatically detects the presence of an aggregate, compares it to a query, and transforms the query to use the aggregate if it matches (that is, if the aggregate is consistent with a subset of the query's join and selection criteria). The transformed query is then passed along to the database for completion.
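To illustrate this transformation (the aggregate table name and column names below are invented for illustration; SAP generates its own internal names), a query restricted to Sales Organization and grouped by Division could be redirected from the InfoCube's star schema to a much smaller, pre-summarized aggregate table:

```sql
-- Hypothetical sketch of the rewrite NetWeaver performs: instead of joining
-- the fact table to its dimension tables, the same result is read from a
-- small aggregate table (name and columns invented for this sketch).
SELECT "SALESORG", "DIVISION",
       SUM( "CRMEM_CST" ) AS "CRMEM_CST",
       SUM( "CRMEM_QTY" ) AS "CRMEM_QTY"
FROM "/B49/E100123"              -- aggregate fact table (invented name)
WHERE "SALESORG" = 'B310'
GROUP BY "SALESORG", "DIVISION";
```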
The screenshot in Figure 2 shows a sample aggregate definition over the characteristics Country, Division, Sales Organization, and Calendar Year/Month for InfoCube 01. Multiple aggregates were defined over each InfoCube to improve performance of this workload. In some cases, additional or more detailed aggregates could have been defined to further improve query-only performance. However, the maintenance cost of these aggregates during the query + data load test was found to be too high to achieve the desired results in terms of response time and overall throughput: a maximum data load time window was targeted, and since the total data load time includes the time needed to refresh all aggregates to reflect the newly loaded data, the number and complexity of the aggregates had to be balanced against that window.

Figure 2. Aggregate Maintenance screenshot (SAP transaction code RSA1 -> Maintain Aggregates)

DB2 for i Materialized Query Tables (MQTs)

MQTs are DB2 tables that contain the results of a query along with the query's definition. An MQT can be substituted for one or more base tables in a query by the DB2 query optimizer. A matching MQT is one whose definition contains all or a subset of the selection criteria specified in the query, so that it holds the data the query needs. Materialized query tables can greatly improve response times for complex queries by storing precomputed results of expensive operations such as joins, aggregation, and selection. The MQT that parallels the SAP aggregate in the previous section would be defined as follows:

CREATE TABLE MQT_SAMPLE AS (
  SELECT "X2"."/B49/S_COUNTRY" AS COUNTRY,
         "S1"."/B49/S_SALESORG" AS SALESORG,
         "D3"."/B49/S_DIVISION" AS DIVISION,
         SUM( "F"."/B49/S_CRMEM_CST" ) AS CRMEM_CST,
         SUM( "F"."/B49/S_CRMEM_QTY" ) AS CRMEM_QTY,

SUM ( "F". "/B49/S_CRMEM_VAL" ) AS CRMEM_VAL, SUM ( "F". "/B49/S_INCORDCST" ) AS INCORDCST, SUM ( "F". "/B49/S_INCORDQTY" ) AS INCORDQTY, SUM ( "F". "/B49/S_INCORDVAL" ) AS INCORDVAL, SUM ( "F". "/B49/S_INVCD_CST" ) AS INVCD_CST, SUM ( "F". "/B49/S_INVCD_QTY" ) AS INVCD_QTY, SUM ( "F". "/B49/S_INVCD_VAL" ) AS INVCD_VAL, SUM ( "F". "/B49/S_OPORDQTYB" ) AS OPORDQTYB, SUM ( "F". "/B49/S_OPORDVALS" ) AS OPORDVALS, SUM ( "F". "/B49/S_ORD_ITEMS" ) AS ORD_ITEMS, SUM ( "F". "/B49/S_RTNSCST" ) AS RTNSCST, SUM ( "F". "/B49/S_RTNSQTY" ) AS RTNSQTY, SUM ( "F". "/B49/S_RTNSVAL" ) AS RTNSVAL, SUM ( "F". "/B49/S_RTNS_ITEM" ) AS RTNS_ITEM, COUNT( * ) AS RECCOUNT FROM "/B49/EBENCH01" "F" JOIN "/B49/DBENCH011" "D1" ON "F". "KEY_BENCH011" = "D1". "DIMID" JOIN "/B49/DBENCH01T" "DT" ON "F". "KEY_BENCH01T" = "DT". "DIMID" JOIN "/B49/DBENCH013" "D3" ON "F". "KEY_BENCH013" = "D3". "DIMID" JOIN "/B49/SSALESORG" "S1" ON "D3". "/B49/S_SALESORG" = "S1". "SID" JOIN "/B49/XCUSTOMER" "X2" ON "D1". "/B49/S_SOLD_TO" = "X2". "SID" GROUP BY "X2". "/B49/S_COUNTRY", "S1". "/B49/S_SALESORG", "D3". "/B49/S_DIVISION" ) DATA INITIALLY IMMEDIATE REFRESH DEFERRED MAINTAINED BY USER ENABLE QUERY OPTIMIZATION ; Like aggregates, multiple MQTs for each InfoCube were defined to satisfy query requirements for maximizing performance in both the query only and query + data load tests. Business Warehouse Accelerator (BW Acclerator) The SAP NetWeaver BW Accelerator is an appliance a hardware and software bundle which serves to improve the performance of Business Warehouse search and analysis functions. Using SAP NetWeaver 7.0 as a base, the TREX search and classification engine builds up the BW Accelerator with BW Accelerator indexes. These compressed structures represent replicated BW star schema data and, once created, are used transparently by SAP NetWeaver. The BW Accelerator was comprised of four IBM HS22 blades. 
IBM s General Parallel File System (GPFS) was used to provide high performance access to a shared storage pool across all BW Accelerator blades. Once the appliance was attached and configured, the BW indexes were created from the Data Warehouse Workbench Modeling menu for each cube in the target MultiProvider. Figure 3 shows the modeling menu where Maintaining BI Acclerator Indexes can be selected, while Figure 4 shows the detail for the BIA Index Maintenance Wizard. Once built, the BWA indexes can be toggled on or off for query use: http://w3.ibm.com/support/techdocs Page 7 of 15

Figure 3. Data Warehouse Modeling BW Accelerator Maintenance screenshot (SAP transaction code RSA1)

Figure 4. BIA Index Maintenance Wizard screenshot (SAP transaction code RSA1 -> Maintain BI Accelerator Index)

Results

Two separate data points were measured: query only, and query concurrent with three delta loads (query + data load) timed to start at 20-minute intervals. High CPU utilization (>90%) on the central box was desired. The key metrics measured were:

- Overall SAP qns/hr (query navigation steps per hour)
- Overall SAP average response time (in seconds)
- Individual query average response times for the eleven data warehouse queries
- CPU utilization on the central installation IBM i server

The defined pre-test performance goals were:

- Collect metrics on the four defined scenarios for query only
- Collect metrics on the three performance-enhancing tests for query plus delta data load
- No or minimal changes to the system configuration between tests

Query Only Results

The table and graph in Figures 5 and 6 summarize the workload results running queries only. As expected, large performance and throughput gains were achieved in all three environments as compared to the baseline results. There are several key points to note in comparing these results:

1. Both the baseline and aggregates tests required additional memory to drive results close to or above 90% CPU utilization. These scenarios could have benefitted from even more memory and/or additional DASD arms to reduce the I/O demands of performing the complex joins.

2. The higher response times in the baseline test reflect the additional time required to perform the join, selection, and grouping in the workload queries. The impact of stored summarization with aggregates, MQTs, and BWA is evident both in the response times and in the overall capacity to drive higher results.

3. With 96 GB of memory, the aggregates query-only number could have been driven slightly higher. However, 250 concurrent users was the maximum that could be supported during the query + data load test, so 250 concurrent users were also used for the query-only results to keep the comparisons consistent across both tests.

4. The MQT results represent a fully utilized system that maximizes CPU, I/O, and workload performance in the query + data load test.

5. The BWA appliance could have supported many more users and achieved much higher results; its performance was gated by the capacity of the IBM i server used for the SAP NetWeaver central installation. In addition, more blades can be configured into the BWA to further increase workload capacity.

Test        Concurrent users   Memory (GB)   SAP qns/hr   SAP avg rsp time (s)   CPU on i box
Baseline    85                 96 (a)        28558        1.335                  88.9%
Aggregates  250                96 (a)        82425        0.742                  86.0%
MQTs        250                64            85207        0.566                  90.1%
BWA         375                64            129590 (b)   0.414                  89.2%

(a) Additional memory required to achieve response time and CPU utilization goals in range with MQTs and BWA
(b) Results gated by the driving capacity of the central configuration server

Figure 5. Query Only Overall Results

Figure 6. Query-only average response times per query (in seconds) for Aggregates, MQTs, BWA, and Baseline across Q1-Q11. NOTE: Baseline response times were 9.8 seconds (Q2) and 8.7 seconds (Q7); the graph was scaled to 3 seconds to better differentiate the faster response times across most of the queries.

Query Plus Delta Data Load Results

The delta data load phase was run at the same level of concurrent SAP users as the query-only workload, and the test was lengthened to run for at least one hour. Once all users were running concurrently, the high-use interval phase of the test began. Delta loads were started at the 0-minute, 20-minute, and 40-minute marks of this interval. The goal was to confirm a minimal impact on average response time and overall SAP qns/hr while completing each delta load in 20 minutes or less. Observations to note on the comparisons in Figures 7 and 8:

1. Despite the additional memory for the SAP aggregates, overall performance was still significantly impacted by I/O constraints. Additional disk arms and/or memory would have benefitted this workload but were not available with the test hardware. The excessive delta load times reflect the I/O burden of performing the complex end-user queries concurrently with the refreshes of the SAP aggregates that are required during delta loads. It was not possible to push CPU above 90% because of the I/O constraints.

2. MQTs and BWA were comparable in their ability to drive CPU and maintain response time while staying within the 20-minute delta load window at the same memory footprint.

3. BWA's increase in overall throughput reflects the query and load activity offloaded to the appliance, freeing the central installation IBM i server to achieve increased capacity.

4. The BWA could have supported much more work, as shown in Figure 9: CPU load on the BWA did not exceed 25% on any blade during the delta load test, and work was evenly distributed across the four blades.

Test        Concurrent users   Memory (GB)   SAP qns/hr   SAP avg rsp time (s)   CPU on i box   Delta load 1   Delta load 2   Delta load 3
Aggregates  250                96            73054        2.086                  87.0%          36:28          41:13          32:09
MQTs        250                64            84070        0.696                  96.4%          18:17          10:58          11:57
BWA         375                64            126094 (a)   0.693                  92.9%          13:37          9:48           8:52

(a) Results gated by the driving capacity of the central configuration server

Figure 7. Query + Data Load Overall Results
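Because the MQTs were defined MAINTAINED BY USER, each delta load's process chain also had to bring the MQTs up to date within the load window. The paper does not show the refresh logic used; one minimal way to implement a full refresh of the sample MQT (an assumption for illustration, not the strategy actually used in the tests) is to clear the table and repopulate it from its defining query:

```sql
-- Hedged sketch of a user-maintained full refresh for MQT_SAMPLE.
-- This simply clears the table and repopulates it with the same
-- join/aggregation as its definition; an incremental strategy could
-- instead apply only the delta rows.
DELETE FROM MQT_SAMPLE;

INSERT INTO MQT_SAMPLE
  SELECT "X2"."/B49/S_COUNTRY", "S1"."/B49/S_SALESORG", "D3"."/B49/S_DIVISION",
         SUM( "F"."/B49/S_CRMEM_CST" ), SUM( "F"."/B49/S_CRMEM_QTY" ),
         SUM( "F"."/B49/S_CRMEM_VAL" ), SUM( "F"."/B49/S_INCORDCST" ),
         SUM( "F"."/B49/S_INCORDQTY" ), SUM( "F"."/B49/S_INCORDVAL" ),
         SUM( "F"."/B49/S_INVCD_CST" ), SUM( "F"."/B49/S_INVCD_QTY" ),
         SUM( "F"."/B49/S_INVCD_VAL" ), SUM( "F"."/B49/S_OPORDQTYB" ),
         SUM( "F"."/B49/S_OPORDVALS" ), SUM( "F"."/B49/S_ORD_ITEMS" ),
         SUM( "F"."/B49/S_RTNSCST" ), SUM( "F"."/B49/S_RTNSQTY" ),
         SUM( "F"."/B49/S_RTNSVAL" ), SUM( "F"."/B49/S_RTNS_ITEM" ),
         COUNT( * )
  FROM "/B49/EBENCH01" "F"
  JOIN "/B49/DBENCH011" "D1" ON "F"."KEY_BENCH011" = "D1"."DIMID"
  JOIN "/B49/DBENCH01T" "DT" ON "F"."KEY_BENCH01T" = "DT"."DIMID"
  JOIN "/B49/DBENCH013" "D3" ON "F"."KEY_BENCH013" = "D3"."DIMID"
  JOIN "/B49/SSALESORG" "S1" ON "D3"."/B49/S_SALESORG" = "S1"."SID"
  JOIN "/B49/XCUSTOMER" "X2" ON "D1"."/B49/S_SOLD_TO" = "X2"."SID"
  GROUP BY "X2"."/B49/S_COUNTRY", "S1"."/B49/S_SALESORG", "D3"."/B49/S_DIVISION";
```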

Figure 8. Query + Data Load average response times per query (in seconds) for Aggregates, MQTs, and BWA across Q1-Q11.

The TREX workload administration toolset provides a set of monitoring tools for the BWA. The chart in Figure 9 graphs both CPU and memory used for each of the four blades in the BWA implementation. The timeframe from 20:15 to 21:45 represents the system utilization during the query + data load workload. CPU averaged around 20% on each blade, and both CPU and memory were evenly utilized across the blades with no tuning required to achieve this balance.

Figure 9. TREX Administration Services summary of CPU and memory on the BWA

Summary

Each of the workload configurations demonstrated its potential performance benefit over a basic SAP NetWeaver Business Warehouse installation. The pros and cons in Figure 10 can be weighed to determine the best fit for a given environment:

Aggregates
  Pros:
  - Capability shipped without charge in SAP NetWeaver
  - NetWeaver-controlled aggregate maintenance
  - Significant performance gains over baseline
  Cons:
  - Some analysis required to define an optimal set of aggregates that balances query performance against aggregate maintenance overhead
  - Least potential for performance improvement of the three tested implementations

MQTs
  Pros:
  - Capability shipped without charge in DB2 for i
  - Performance gains better than SAP aggregates, with a smaller memory footprint
  Cons:
  - Analysis and SQL experience required to define an optimal set of MQTs (service offerings available)
  - System-maintained MQTs not yet available on DB2 for i; the user must define and implement a maintenance strategy (again, service offerings available)

BWA
  Pros:
  - Highest performance gains of the three configurations, with much more capacity available to achieve higher results (could push the four blades harder or add additional blades)
  - BWA indexes the easiest to define; no analysis required
  - BWA indexes automatically maintained
  Cons:
  - Appliance cost
  - Additional knowledge required to install and maintain (service offerings available)

Figure 10. Summary of Configuration Pros and Cons

A generalized view of the cost / skills / performance comparison is shown in Figure 11. While each solution may involve an acquisition cost, the additional DASD and/or memory costs for MQTs and aggregates are incurred only if the system does not already have sufficient capacity. All software needed to implement these solutions is bundled with SAP NetWeaver in the case of aggregates and with DB2 for i in the case of MQTs. The BWA has a higher acquisition cost due to the separate appliance. However, once configured and implemented, the BWA requires the least ongoing skill to maintain and has the highest potential for benefit. The aggregate and MQT solutions both require analysis and implementation skills on a periodic basis to monitor and maintain the highest levels of performance gains; the skill level is higher for MQTs due to the need for a maintenance strategy to keep the table(s) up to date.

Figure 11. Approximated comparison of acquisition cost, skill to implement, and benefit for Aggregates, MQTs, and BWA

The techniques described here can also be applied to your data sets to help your business drive even more performance out of your SAP Business Warehouse solution running on IBM i. Any one, or a combination, of these performance-optimizing enhancements may be the best solution based on price, performance needs, and skills. IBM offers expert consulting to help you analyze your data and apply the techniques that best leverage the capabilities of your IBM i platform. If you would like help applying these techniques to your Business Warehouse solution, contact Frank Kriss (kriss@us.ibm.com) with IBM Systems Lab Services and Training to discuss an SAP on IBM i BW Performance Assessment. For more information about the tests represented in this document, contact Susan Bestgen (sbestgen@us.ibm.com).

Special Notices

Performance is based on measurements using a benchmark similar to a standard SAP benchmark in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the network configuration, the size of the database, the I/O configuration, the storage configuration, and the actual workload. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.