SAP SD Benchmark with DB2 and Red Hat Enterprise Linux 5 on IBM System x3850 M2


SAP SD Benchmark using DB2 and Red Hat Enterprise Linux 5 on IBM System x3850 M2 Version 1.0 November 2008

SAP SD Benchmark with DB2 and Red Hat Enterprise Linux 5 on IBM System x3850 M2

1801 Varsity Drive
Raleigh NC 27606-2072 USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701
PO Box 13588
Research Triangle Park NC 27709 USA

The following terms used in this publication are trademarks of other companies as follows:
IBM System x, IBM, ServeRAID, DB2 and all other IBM products and services mentioned herein are trademarks of International Business Machines Corporation in the United States and other countries.
Linux is a registered trademark of Linus Torvalds.
Red Hat, Red Hat Enterprise Linux and the Red Hat "Shadowman" logo are registered trademarks of Red Hat, Inc. in the United States and other countries.
SAP software, SAP NetWeaver platform, SAP R/3 Enterprise software, mySAP and all other SAP products and services mentioned herein are registered trademarks of SAP AG in Germany and other countries.
Intel and Xeon are registered trademarks of Intel Corporation.
All other trademarks referenced herein are the property of their respective owners.

© 2008 by Red Hat, Inc. This material may be distributed only subject to the terms and conditions set forth in the Open Publication License, V1.0 or later (the latest version is presently available at http://www.opencontent.org/openpub/). The information contained herein is subject to change without notice. Red Hat, Inc. and IBM Corporation shall not be liable for technical or editorial errors or omissions contained herein. Distribution of modified versions of this document is prohibited without the explicit permission of Red Hat, Inc. and IBM Corporation. Distribution of this work or a derivative of this work in any standard (paper) book form for commercial purposes is prohibited unless prior permission is obtained from Red Hat, Inc. and IBM Corporation.

The GPG fingerprint of the security@redhat.com key is: CA 20 86 86 2B D6 9D FC 65 F6 EC C4 21 91 80 CD DB 42 A6 0E

www.redhat.com 2 www.ibm.com

Table of Contents

1. Executive Summary
2. SAP SD Benchmark Overview
   2.1 Benchmark Description & Methodology
   2.2 Key Performance Metrics
       2.2.1 Benchmark Users and Average Dialog Response Time
       2.2.2 Throughput Measurement in SAPS
   2.3 What must be published
3. System Configuration
   3.1 Hardware Configuration
   3.2 Software Configuration
4. Configuration Steps & Tuning
   4.1 Red Hat Enterprise Linux Tuning
   4.2 DB2 Database Tuning
   4.3 SAP Tuning
5. Benchmark Performance Results
6. References
APPENDIX A: Certification by SAP AG
APPENDIX B: Benchmarks Referenced in Figure 2


1. Executive Summary

The SAP Sales and Distribution (SD) Standard Application Benchmark has become a de facto standard for evaluating the performance of ERP solutions. Red Hat Enterprise Linux and DB2 running on IBM System x achieved the best SAP SD result to date on a 24-core x86-64 based server.

Figure 1 shows the distribution of 2-Tier SAP SD Benchmark ratings for x86-64 based servers certified by SAP in the last two years (2007-2008). It is a scatter plot of the 2-Tier SAP SD users achieved versus the number of CPU cores in a server. For more details, refer to the SAP benchmark results site: http://www.sap.com/solutions/benchmark/sd2tier.epx

Figure 1: Distribution of 2-Tier SAP SD Performance by # Cores per Server

Figure 2 shows the best 2-Tier SAP SD Benchmark rating for x86-64 based servers certified by SAP in the last two years (2007-2008): the highest 2-Tier SAP SD users achieved versus the number of CPU cores in a server. For more details, refer to Appendix B and the SAP benchmark results site: http://www.sap.com/solutions/benchmark/sd2tier.epx

Figure 2: Best 2-Tier SAP SD Performance by # Cores per Server

This result of 5,156 SAP SD (2-tier) benchmark users was the best result to date for a 24-core, x86-64 based server. It was achieved using SAP ECC 6.0 / NetWeaver 7.0 and IBM's DB2 9.5 on Red Hat Enterprise Linux (RHEL) 5.2, running on an IBM System x3850 M2 (4 x Intel Xeon X7460, 2.66 GHz, 64 GB) and an IBM TotalStorage DS4500 with 5 drawers of 14 x 36 GB, 15K RPM disks. The 5,156 SAP SD benchmark users were achieved with a 1.97-second average dialog response time, 25,850 SAPS, a measured throughput of 1,551,000 dialog steps per hour (or 517,000 fully processed line items per hour), and an average CPU utilization of 98 percent.

Figure 3: Competitive Comparison of 24-Core Servers

Referring again to the SAP benchmark results site (http://www.sap.com/solutions/benchmark/sd2tier.epx) and comparing this result to other leading SAP-certified Unix / Linux / Windows results on industry-standard Intel Xeon-based (model X7460, 4x6=24 core) servers, this result surpasses:

1. 4,600 SAP SD (2-tier) Benchmark users achieved by Sun, with Solaris 10 and MaxDB 7.6 with SAP ECC 6.0 running on the Sun Fire X4450 (4 processors / 24 cores / 24 threads, Intel Xeon Processor MP X7460, 2.66 GHz, 64 KB L1 cache per core and 3 MB L2 cache per 2 cores, 16 MB L3 cache per processor, and 80 GB RAM). SAP certification number 2008051.

2. 5,135 SAP SD (2-tier) Benchmark users achieved by Fujitsu Siemens Computers, with Windows Server 2003 Enterprise Edition and SQL Server 2005 with SAP ECC 6.0 running on the PRIMERGY RX600 S4 (4 processors / 24 cores / 24 threads, Intel Xeon Processor MP X7460, 2.66 GHz, 64 KB L1 cache per core and 3 MB L2 cache per 2 cores, 16 MB L3 cache per processor, and 64 GB RAM). SAP certification number 2008060.

3. 5,155 SAP SD (2-tier) Benchmark users achieved by HP, with Windows Server 2003 Enterprise Edition and SQL Server 2005 with SAP ECC 6.0 running on the ProLiant DL580 G5 (4 processors / 24 cores / 24 threads, Intel Xeon Processor MP X7460, 2.66 GHz, 64 KB L1 cache per core and 3 MB L2 cache per 2 cores, 16 MB L3 cache per processor, and 64 GB RAM). SAP certification number 2008050.

More details on these and every other SAP benchmark can be found at http://www.sap.com/benchmark.

This benchmark result demonstrates the close and continued cooperation between IBM Corp. and Red Hat Inc. to showcase the superior combined performance of Red Hat Enterprise Linux (RHEL) and DB2 running on IBM's Intel Xeon-based System x servers.

2. SAP SD Benchmark Overview

SAP Standard Application Benchmarks help customers and partners find the appropriate hardware configuration for their IT solutions. SAP, working in concert with its hardware partners, developed the SAP Standard Application Benchmarks to test the hardware and database performance of SAP applications and components. The benchmarking procedure is standardized and well defined. It is monitored by the SAP Benchmark Council, made up of representatives of SAP and the technology partners involved in benchmarking. Originally introduced to strengthen quality assurance, the SAP Standard Application Benchmarks can also be used to test and verify the scalability, concurrency, and multi-user behavior of system software components, RDBMS, and business applications.

Customers can benefit from the benchmark results in various ways. For example, benchmark results illuminate the scalability and manageability of large installations. The benchmark results:

- Provide basic information to configure and size SAP Business Suite
- Allow users to compare different platforms
- Enable proof-of-concept scenarios
- Provide an outlook for future performance levels (new platforms, new servers, and so on)

2.1 Benchmark Description & Methodology

The SAP SD benchmark can be configured as either two-tier or three-tier. In the two-tier configuration, the database server and the SAP NetWeaver application server run on the same physical machine. In the three-tier configuration, the presentation tier, application tier, and database tier each run on separate physical servers. This benchmark was run using a two-tier configuration, in which the database and application layers reside on a single system and the simulation is driven by the presentation server (also known as the benchmark driver).

An SAP Standard Application Benchmark consists of script files that simulate the most typical transactions and workflow of a user scenario. It has a predefined SAP client database that contains sample company data against which the benchmark is run. The benchmark transactions of each component usually reflect the data throughput of an installation (for example, orders, number of goods movements, etc.). Note: each benchmark user has his or her own master data, such as material, vendor, or customer master data, to avoid data-locking situations.

The Sales and Distribution benchmark is one of the most CPU-intensive benchmarks available and has become a de facto standard for SAP's platform partners and in the ERP (Enterprise Resource Planning) environment.

During the benchmark a defined set of business

transactions are run through, as shown in the table below. The SD benchmark driver provided by SAP simulates end users driving business transactions in dialog steps.

The Sales and Distribution (SD) Benchmark covers a sell-from-stock scenario, which includes the creation of a customer order with five line items and the corresponding delivery with subsequent goods movement and invoicing. It consists of the following SAP transactions:

- Create an order with five line items (VA01)
- Create a delivery for this order (VL01N)
- Display the customer order (VA03)
- Change the delivery and post goods issue (VL02N)
- List 40 orders for one sold-to party (VA05)
- Create an invoice (VF01)

Table 1: Dialog steps of the standard Sales & Distribution (SD) benchmark

Each of the simulated users repeats this series of transactions from the start to the end of a benchmark run. During the ramp-up phase, the number of concurrently working users is increased until the expected limit (e.g., 5,000) is reached. When all users are active, the test interval starts. This performance level must be maintained for at least 15 minutes (benchmark rule). After at least 5 minutes of the high-load phase, one or more database checkpoints must be forced (i.e., all log file data is flushed back to the database) to stress the I/O subsystem in a realistic way (benchmark rule). At the end of the high-load phase, users are gradually taken off the system until none is active. When the test concludes, all relevant data (some of which is gathered with an SAP-developed operating system monitor) is transferred to the presentation server for further evaluation.

Figure 4: Benchmark Run

The SAP Standard Application Benchmarks measure all performance-relevant metrics, such as database request times, wait times, CPU utilization, the average dialog response time for a given number of benchmark users (with a fixed think time of 10 seconds between dialog steps), and the achieved throughput.

2.2 Key Performance Metrics

2.2.1 Benchmark Users and Average Dialog Response Time

A benchmark can only be certified if the average dialog response time is less than two seconds (the value long accepted as a tolerable system reaction time). More and more benchmark users are switched to the system under test until the response time falls outside this limit. Only SAP-certified and audited benchmarks may be published by partners, to ensure results that can be fairly compared with each other. A typical result would read: 5,000 SD benchmark users with an average dialog response time of 1.98 seconds.

2.2.2 Throughput Measurement in SAPS

SAP has defined a unit for measuring throughput in a mySAP Business Suite environment: SAPS (SAP Application Benchmark Performance Standard). 100 SAPS are defined as 2,000 fully processed business line items per hour in the standard SD application benchmark. This throughput is achieved by processing 6,000 dialog steps (screen changes) and 2,000 postings per hour, or 2,400 SAP transactions per hour, in the SD benchmark. In the SD standard benchmark, "fully business processed" means the full business workflow of an order line item (creating the order, creating a delivery note for this order, displaying the order, changing the delivery, posting a goods issue, listing orders, and creating an invoice) has completed.
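The SAPS definition above fixes a simple proportional relationship between line items, dialog steps, and SAPS. The sketch below is a minimal illustration of that arithmetic, plugging in the throughput figures this paper reports:

```python
# SAPS conversion as defined for the SD benchmark:
#   100 SAPS = 2,000 fully processed order line items per hour
#            = 6,000 dialog steps (screen changes) per hour

def saps_from_line_items(line_items_per_hour):
    """Convert fully processed order line items per hour to SAPS."""
    return line_items_per_hour / 2_000 * 100

def dialog_steps_per_hour(saps):
    """Each 100 SAPS corresponds to 6,000 dialog steps per hour."""
    return saps / 100 * 6_000

# Figures reported for this benchmark run:
saps = saps_from_line_items(517_000)
print(saps)                          # 25850.0 SAPS
print(dialog_steps_per_hour(saps))   # 1551000.0 dialog steps per hour
```

Both derived values match the numbers quoted in the Executive Summary (25,850 SAPS and 1,551,000 dialog steps per hour), confirming the internal consistency of the reported result.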
2.3 What must be published

The following information must be part of a benchmark press release:

- Type of client/server configuration (2-tier or 3-tier)
- The mySAP Business Suite component
- RDBMS type and version number
- Operating system type and version number
- Tested standard mySAP Business Suite components (FI, PP, SD, ... or a combination of these)
- Number of tested benchmark users
- Average dialog response time in n.nn sec
- Achieved throughput in dialog steps / hour
- A detailed description of the hardware configuration (type, size of main memory, average CPU utilization, and function for all servers involved in the benchmark)
- Confirmation that the benchmark is certified by SAP (e.g., "This benchmark fully complies with SAP's issued benchmark regulations and has been audited and certified by SAP.")
- A reference where readers can get more information (e.g., "Details regarding this benchmark are available upon request from the hardware partner or SAP AG.")

3. System Configuration

Figure 5: Hardware Configuration

3.1 Hardware Configuration

IBM System x3850 M2, equipped with:
- 4 x Intel Xeon X7460 six-core processors running at 2.66 GHz, with 3 MB L2 cache per two cores and 16 MB L3 cache per processor; 24 cores in total
- 64 GB DDR2 RAM
- 1 x 146 GB internal SAS disk
- 1 x IBM 4Gb Dual Port FibreChannel PCI-E HBA

IBM TotalStorage DS4500 controller, 1742-900, with:
- 2 I/O controllers with 1 GB of cache memory each
- 5 EXP-710 enclosures, each with 14 x 36 GB, 15K RPM drives, for a total of 70 disks

Five volumes were created from these disks:
- 1 RAID5 volume striped across all 14 disks of one enclosure for the SAP binaries, backups, and other incidental files
- 1 RAID0 volume striped across all 14 disks of another enclosure for the database log
- 1 RAID5 volume striped across all 14 disks of the third enclosure for the main portion of the DB2 database
- 1 RAID0 volume striped across all 14 disks of the fourth enclosure to hold the multiplexed VBDATA tables
- 1 RAID0 volume striped across all 14 disks of the fifth enclosure to hold the I/O-sensitive database tables

3.2 Software Configuration

- Red Hat Enterprise Linux 5.2
- SAP Enterprise Core Component (ECC) 6.0, with the NetWeaver 7.0 kernel
- DB2 v9.5 Enterprise Server, Fix Pack 2

Figure 6: Software Configuration

4. Configuration Steps & Tuning

4.1 Red Hat Enterprise Linux Tuning

Red Hat Enterprise Linux supports hardware large-page enablement through kernel parameter settings. For this benchmark, 25,000 2 MB pages were configured for use by SAP and DB2, providing a slight increase in performance over the default 4 KB page size.

4.2 DB2 Database Tuning

The database was tuned to provide sufficient buffer cache to allow the database to operate with better than 99% database buffer access. The database log device was created on a large striped volume on the DS4500, using a raw disk volume at the operating system level. Another volume was used to hold the hot tables that were accessed extensively during the benchmark; this allowed for custom tuning of the table parameters and separated the I/O of these tables from the rest of the database activity. The update tables (VBDATA, VBMOD, and VBHDR) were also separated from their default tablespace and placed on another raw device, which reduced contention with the core database activity and allowed a custom buffer pool to hold the contents of those tables for faster transaction processing.

4.3 SAP Tuning

After the initial installation of the SAP environment, which included the SAP central instance (CI) and 15 dialog instances, the environment was tuned to set up appropriately sized buffers and work processes. The final configuration split the functions into 12 dialog instances and four update instances; every third dialog instance was configured to use the same update instance. This allowed for better resource allocation for the SD benchmark workload. Each instance was configured to use the processing resources of a single processor, i.e., the six cores of that processor. This was accomplished by using operating system tools to identify the processes of the instance and restrict them to those cores. The pinning of the instance processes to a processor was accomplished with the taskset utility.
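The same effect as taskset can be achieved programmatically through the Linux scheduler-affinity API. The sketch below is a minimal illustration, not part of the original benchmark scripts; it assumes a hypothetical core numbering in which cores 0-5 belong to the first socket, 6-11 to the second, and so on, which must be verified against /proc/cpuinfo on real hardware:

```python
import os

# Hypothetical layout assumed for this sketch: six consecutive core IDs
# per physical processor (socket). Verify with /proc/cpuinfo ("physical id").
CORES_PER_SOCKET = 6

def socket_cores(socket, cores_per_socket=CORES_PER_SOCKET):
    """Return the set of core IDs belonging to one physical processor."""
    first = socket * cores_per_socket
    return set(range(first, first + cores_per_socket))

def pin_to_socket(pid, socket):
    """Restrict a process to one socket's cores; equivalent to
    running: taskset -cp <first>-<last> <pid>  (Linux only)."""
    os.sched_setaffinity(pid, socket_cores(socket))
    return os.sched_getaffinity(pid)

# Pin the current process (pid 0) to the first socket, guarding against
# machines that lack the API or have fewer cores than assumed:
if hasattr(os, "sched_setaffinity") and (os.cpu_count() or 0) >= CORES_PER_SOCKET:
    print(pin_to_socket(0, 0))
```

In the benchmark setup described above, each SAP instance's processes would be pinned this way to one of the four X7460 sockets, keeping an instance's working set local to a single processor's caches.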

5. Benchmark Performance Results

The benchmark result is shown in Table 2. The official certification of this benchmark is shown in Appendix A.

Benchmark: 2-Tier SAP Sales & Distribution (SD)
SAP Components: SAP ECC 6.0 / NetWeaver 7.0
Database: DB2 9.5
Operating System: Red Hat Enterprise Linux 5.2
Number of simulated users: 5,156 (Sales & Distribution)
Number of dialog steps / hour: 1,551,000
Fully processed order line items / hour: 517,000
Average dialog response time: 1.97 seconds
Average DB request time: Dialog = 0.030 sec / Update = 0.030 sec
Average CPU utilization of central server: 98%

Table 2: Results of the 2-Tier SAP Sales & Distribution (SD) Benchmark

6. References

1. SAP Sales and Distribution (SD) Benchmark: http://www.sap.com/solutions/benchmark/sd.epx
2. SAP SD Standard Application Benchmark - Published Results, Two-Tier Internet Configuration: http://www.sap.com/solutions/benchmark/sd2tier.epx

APPENDIX A: Certification by SAP AG

APPENDIX B: Benchmarks Referenced in Figure 2

1. 2,752 SAP SD (2-tier) Benchmark users achieved by HP, with Windows Server 2003 Enterprise Edition and SQL Server 2005 with SAP ECC 6.0 running on HP ProLiant DL385 G5p (2 processors / 8 cores / 8 threads, Quad-Core AMD Opteron Processor 2384, 2.7 GHz, 128 KB L1 cache and 512 KB L2 cache per core, 6 MB L3 cache per processor, and 32 GB RAM). SAP certification number 2008065.

2. 3,801 SAP SD (2-tier) Benchmark users achieved by HP, with Windows Server 2003 Enterprise Edition and SQL Server 2005 with SAP ECC 6.0 running on HP ProLiant DL585 G5 (4 processors / 16 cores / 16 threads, Quad-Core AMD Opteron Processor 8360 SE, 2.5 GHz, 128 KB L1 cache and 512 KB L2 cache per core, 2 MB L3 cache per processor, and 64 GB RAM). SAP certification number 2008041.

3. 5,156 SAP SD (2-tier) Benchmark users achieved by IBM, with Red Hat Enterprise Linux 5.2 and DB2 9.5 with SAP ECC 6.0 running on IBM System x3850 M2 (4 processors / 24 cores / 24 threads, Intel Xeon Processor MP X7460, 2.66 GHz, 64 KB L1 cache per core and 3 MB L2 cache per 2 cores, 16 MB L3 cache per processor, and 64 GB RAM). SAP certification number 2008066.

4. 7,010 SAP SD (2-tier) Benchmark users achieved by HP, with SuSE Linux Enterprise Server 10 and Oracle 10g with SAP ECC 6.0 running on HP ProLiant DL785 G5 (8 processors / 32 cores / 32 threads, Quad-Core AMD Opteron Processor 8384, 2.7 GHz, 128 KB L1 cache and 512 KB L2 cache per core, 6 MB L3 cache per processor, and 132 GB RAM). SAP certification number 2008064.

5. 9,200 SAP SD (2-tier) Benchmark users achieved by IBM, with Windows Server 2003 Datacenter Edition and DB2 9.5 with SAP ECC 6.0 running on IBM System x3950 M2 (8 processors / 48 cores / 48 threads, Intel Xeon Processor MP X7460, 2.66 GHz, 64 KB L1 cache per core and 3 MB L2 cache per 2 cores, 16 MB L3 cache per processor, and 132 GB RAM). SAP certification number 2008046.

6. 10,600 SAP SD (2-tier) Benchmark users achieved by IBM, with Windows Server 2003 Datacenter Edition and DB2 9.5 with SAP ECC 6.0 running on IBM System x3950 M2 (16 processors / 64 cores / 64 threads, Quad-Core Intel Xeon Processor X7350, 2.93 GHz, 64 KB L1 cache per core and 4 MB L2 cache per 2 cores, and 264 GB RAM). SAP certification number 2008030.