
Reduce Costs & Increase Oracle Database OLTP Workload Service Levels: PowerEdge 2950 Consolidation to PowerEdge 11th Generation

A Dell Technical White Paper
Dell Database Solutions Engineering
Balamurugan B, Ravi Ramappa
April 2010

EXECUTIVE SUMMARY

The Dell enterprise portfolio is evolving to incorporate better performing, more energy efficient, and more highly available products. With the introduction of Dell's latest server product line, customers have an opportunity to improve their total cost of ownership by consolidating distributed legacy environments. This is the third white paper in a series that discusses server consolidation on the Dell 11G product line. Earlier white papers that discuss DSS/OLTP workload consolidation on Dell PowerEdge 11G 2-socket servers include:

Consolidating DSS Workloads on Dell PowerEdge 11G Servers Using Oracle 11g Database Replay
http://www.dell.com/downloads/global/solutions/database_11g_consolidate.pdf?c=ec&l=en&s=gen

Consolidating OLTP Workloads on Dell PowerEdge 11G Servers
http://www.dell.com/downloads/global/solutions/consolidating_oltp_workloads.pdf?c=us&cs=555&l=en&s=biz

This white paper focuses on Online Transaction Processing (OLTP) workloads and consolidation on Dell PowerEdge 11G 4/2-socket servers. Dell strives to simplify IT infrastructure by providing methods to consolidate legacy production environments and reduce data center complexity. The tools and procedures described in this white paper can help administrators test, compare, validate, and implement the latest hardware and database solution bundles. Dell established these procedures and guidelines based on lab experiments and database workload simulations performed by the Dell Database Solutions Engineering team. Using the tools and procedures described in this document, customers can not only select the appropriate database solution hardware and software stack, but also optimize the solution's total cost of ownership according to the database workloads they choose to run. The intended audience of this white paper includes database administrators, IT managers, and system consultants.

THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.

© 2010 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.

Dell, the DELL logo, the DELL badge, PowerConnect, and PowerVault are trademarks of Dell Inc. Symantec and the SYMANTEC logo are trademarks or registered trademarks of Symantec Corporation or its affiliates in the US and other countries. Microsoft, Windows, Windows Server, and Active Directory are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. EMC is a registered trademark of EMC Corporation. Intel and Xeon are either trademarks or registered trademarks of Intel Corporation.

April 2010

Contents

EXECUTIVE SUMMARY
INTRODUCTION
SYSTEM ARCHITECTURE
TEST CONFIGURATION
TEST METHODOLOGY
CONSOLIDATION FACTOR
SUMMARY

Tables
Table 1: Test Configuration

Figures
Figure 1: System Architecture
Figure 2: Base Configuration - TPS Comparison Between Legacy Production and R810 Test Environment
Figure 3: Base Configuration - AQRT Comparison Between Legacy Production and R810 Test Environment
Figure 4: Base Configuration - CPU Time Comparison Between Legacy Production and R810 Test Environment
Figure 5: At Legacy Saturated User Load - TPS Comparison Between Legacy Production and R810 Test Environment
Figure 6: At Legacy Saturated User Load - AQRT Comparison Between Legacy Production and R810 Test Environment
Figure 7: At Legacy Saturated User Load - CPU Time Comparison Between Legacy Production and R810 Test Environment

INTRODUCTION

Server consolidation can be defined as maximizing the efficiency of computer server resources, thereby minimizing the associated power/cooling, rack footprint, and licensing costs. It essentially solves a fundamental problem called server sprawl, in which multiple, underutilized servers take up more space and consume more power than the workload requires.

OLTP database systems typically service hundreds or thousands of concurrent users. An example of this type of system is a travel reservation system in which a large number of customers and agents make online travel reservations or check available flights and schedules. The database transactions performed by these thousands of concurrent users translate into tens of thousands of I/O requests to the backend storage subsystem, depending on the nature of the OLTP transactions. The database host CPUs can only be used efficiently if the backend storage subsystem is configured with a sufficient number of disks to handle this large number of I/O requests. Otherwise, the Oracle database host CPUs exhibit large IOWAIT times instead of doing useful work. In this scenario, consolidating, upgrading, or migrating to a faster database server, or scaling the number of CPUs or the amount of memory, does not help. The correct approach is to first scale the backend disk subsystem to handle the I/O requests, and then move to the next stage of CPU and memory sizing, as discussed later in this white paper.

The objective of this white paper is to identify the consolidation factor for an Oracle database running OLTP workloads when moving from legacy 9th-generation PowerEdge 2950 2U 2-socket servers to the new 11G PowerEdge R810 2U 4/2-socket servers. An enterprise database system may be running DSS, OLTP, or a mixed workload. OLTP workloads typically send thousands of small I/O requests from the database servers to the backend storage subsystem. This large volume of I/O requests means that the backend storage subsystem must have a sufficient number of disks to handle the requests coming from the hosts.

Consider a two-node Oracle RAC database hosted on two ninth-generation (9G) PowerEdge 2950 dual-socket, dual-core or quad-core servers running Oracle 10g Release 2. Dell recently announced the availability of its eleventh-generation (11G) server product line equipped with a chipset designed to support Intel Xeon 7500 series 4/6/8-core processors, QuickPath Interconnect, DDR3 memory technology, and PCI Express generation 2. A potential replacement for 9G 2U Dell servers is the 11G 2U Dell PowerEdge R810 server. The R810 supports four-socket, eight-core processors and two different types of energy-efficient CPUs, and has a highly efficient overall architecture. A multi-node Oracle RAC cluster on legacy systems with 2-socket dual-core processors can be replaced by an Oracle RAC cluster consisting of fewer PowerEdge 11G nodes with 4-socket eight-core processors, and still process the OLTP workload faster with less power consumption and lower Oracle RAC licensing cost. The savings in RAC licensing may be used to configure and scale the backend storage system with enough I/O modules and disks to remove the I/O bottlenecks that are almost always an issue in an OLTP environment.

Also, based on the results of this study, one may determine how many distributed standalone legacy environments running OLTP workloads can be consolidated onto a single Oracle RAC solution running on Dell R810 servers.
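The introduction argues that an I/O-bound host shows up as high IOWAIT rather than exhausted CPU capacity. As a rough illustration only (not part of the original study), the following sketch shows one way an administrator might confirm this on a Linux database host before deciding between storage scaling and server consolidation. It assumes the sysstat package is installed and that sqlplus can connect locally as SYSDBA.

#!/bin/bash
# Illustrative check for an I/O-bound Oracle host (assumed tooling, not taken
# from the paper). High %iowait with modest %user points at the storage
# subsystem rather than the CPUs.

iostat -x 10 3     # per-device utilization and wait/service times
sar -u 10 3        # overall %user / %system / %iowait breakdown

# The same picture from inside the database: I/O-related wait events
# dominating the top system waits.
sqlplus -s / as sysdba <<'EOF'
SET PAGESIZE 50 LINESIZE 120
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event LIKE 'db file%'
ORDER  BY time_waited DESC;
EOF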

SYSTEM ARCHITECTURE

Figure 1: System Architecture

As shown in the figure above, the legacy production environment consists of a two-node Oracle 10g R2 RAC cluster running on 9G 2U 2-socket PowerEdge 2950 III servers, and the test environment is a single-node Oracle 11g R2 RAC database running on an 11G 2U 4/2-socket PowerEdge R810 server.

Note: The intent of this paper is not to recommend converting a RAC cluster to a single-node setup. The test setup was designed to compare host CPU behavior when consolidating the database workload while ensuring that the number of cores is the same in both setups. To simulate Oracle RAC overhead in the single-node configuration on the R810 server, the Oracle 11g R2 database was configured with Oracle 11g R2 Grid Infrastructure.
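For readers reproducing a similar setup, the checks below are one way (our assumption, not a step documented in the paper) to confirm that a single-node 11g R2 database is actually running under Grid Infrastructure, so that the clusterware and ASM overhead described above is present. The database name racdb is a placeholder.

#!/bin/bash
# Illustrative Grid Infrastructure sanity checks on the R810 test node.
# "racdb" is a hypothetical database name.

crsctl check crs                  # clusterware stack status (use "crsctl check has" for an Oracle Restart install)
crsctl stat res -t                # GI-managed resources: ASM, listener, database
srvctl status asm                 # ASM instance state
srvctl status database -d racdb   # database state under GI management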

TEST CONFIGURATION

Table 1 describes the complete software and hardware configuration used throughout testing in both the simulated legacy production environment and the 11G test environment.

Table 1: Test Configuration

| Component        | Legacy Production Environment | R810 Test Environment |
|------------------|-------------------------------|-----------------------|
| Systems          | Two PowerEdge 2950 III 2U servers | One PowerEdge R810 2U 4/2-socket server |
| Processors       | Two Intel Xeon X5460, 3.16 GHz, quad-core, per node (L2 cache: 2 x 4 MB per CPU) | Two Intel Xeon X7560, 2.26 GHz, eight-core (L2 cache: 8 x 256 KB; L3 cache: 24 MB) |
| Memory           | 32 GB DDR2 per node (64 GB total) | 64 GB DDR3 |
| Internal disks   | Two 73 GB 2.5" SAS per node | Two 73 GB 2.5" SAS |
| Network          | Two Broadcom NetXtreme II BCM5708 Gigabit Ethernet | Four Broadcom NetXtreme II BCM5709 Gigabit Ethernet |
| External storage | Dell/EMC CX4-480 with 146 GB Fibre Channel disks | Dell/EMC CX4-480 with 146 GB Fibre Channel disks |
| HBA              | One QLE2462 per node | One QLE2462 |
| OS               | Enterprise Linux 4.6 | Enterprise Linux 5.4 |
| Oracle software  | Oracle 10g R2 (10.2.0.4); file system: ASM; disk groups: DATABASE, DATA; sga_target = 1600M; pga_aggregate_target = 800M | Oracle 11g R2 (11.2.0.1.0); file system: ASM; disk groups: DATABASE, DATA; memory_target = 2400M |
| Workload         | Quest Benchmark Factory TPC-C workload; scale factor: 3000; user connections: 200-5000 | Quest Benchmark Factory TPC-C workload; scale factor: 3000; user connections: 200-5000 |
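As a hedged illustration of the Oracle memory settings listed in Table 1 (not commands taken from the paper), the snippet below shows how they could be applied. The SCOPE choices and the memory_max_target value are our assumptions, and each instance would need a restart afterwards.

#!/bin/bash
# Sketch only: applying the Table 1 memory parameters via sqlplus.

# Legacy 10g R2 nodes: manually sized SGA and PGA targets.
sqlplus -s / as sysdba <<'EOF'
ALTER SYSTEM SET sga_target           = 1600M SCOPE=SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 800M  SCOPE=SPFILE;
EOF

# R810 test environment, 11g R2: Automatic Memory Management.
# memory_max_target is an assumption, set equal to memory_target.
sqlplus -s / as sysdba <<'EOF'
ALTER SYSTEM SET memory_max_target = 2400M SCOPE=SPFILE;
ALTER SYSTEM SET memory_target     = 2400M SCOPE=SPFILE;
EOF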

TEST METHODOLOGY

Dell's Database Solutions engineers used Quest Software Benchmark Factory TPC-C, a load-generating utility that simulates OLTP users and transactions on a database for a given number of users. The TPC-C workload provided by the Benchmark Factory schema simulates an order entry system consisting of multiple warehouses, with data populated in tables according to the scale factor defined during table creation. The most commonly used metrics for an OLTP environment are transactions per second (TPS) and average query response time (AQRT). The AQRT of an OLTP database environment may be described as the average time it takes for an OLTP transaction to complete and deliver its results to the end user. The AQRT is the most important factor when it comes to fulfilling end-user requirements, and it establishes the performance criteria for an OLTP database. A 2-second response time was chosen as the basis for our Service Level Agreement (SLA) and was maintained throughout the testing.

Our initial goal was to stress the legacy PowerEdge 2950 III system to determine its optimal performance in terms of userload and TPS, ensuring that there was no bottleneck from either the storage or the host memory. The legacy database was configured with a scale factor of 3000, which created a set of tables with millions of rows; the resulting database size was around 290 GB. Initially the backend storage subsystem, a Dell/EMC CX4-480 storage array, was configured with ten 15K RPM 146 GB disks in a RAID 10 configuration. Once the database was populated, we started with 200 concurrent users and increased the userload toward 5000 in increments of 200, randomly running transactions against the legacy database while making sure that the AQRT always stayed below 2 seconds.

The test methodology used is as follows (a sketch of the schema reload and disk group growth steps appears after this list):

1. To simulate the legacy production environment, we built a two-node Oracle 10g R2 RAC cluster of PowerEdge 2950 III servers with quad-core, dual-socket 3.16 GHz CPUs, connected to a Dell/EMC CX4-480 storage system configured with a 100 GB LUN for the database system files, a 400 GB LUN for the DATA ASM disk group, and a 2 GB LUN for the voting disk and Oracle Cluster Registry (OCR) partitions.

2. Using the Quest Software Benchmark Factory TPC-C workload, we populated the test data with a scale factor of 3000 into the simulated legacy production environment.

3. After data population, we used Oracle Data Pump to export the data at the schema level and avoid a data reload for each test iteration: expdp system/oracle@racdb1 SCHEMAS=quest CONTENT=all directory=export

4. We started the first test iteration with a base configuration of 10 disks for the DATA ASM disk group and a 200-user load to establish the saturation point of the legacy production environment. We then increased the userload in 200-user increments while constantly monitoring the AQRT. Once the AQRT exceeded 2 seconds, the test was stopped.

5. After each iteration we conducted a host CPU time analysis to determine the limiting factor for host performance.

6. Once the backend spindles were saturated, they started exhibiting large I/O latency, which resulted in large IOWAIT at the host CPU and a large AQRT. To reduce the IOWAIT at the host CPU, the number of spindles in the DATA ASM disk group was increased by 10 disks for the next iteration. This methodology was continued until the host CPU was optimally utilized with a small IOWAIT time. At the same time, we monitored whether we were able to keep the average query response time below 2 seconds at a higher userload than in the earlier iteration.

7. To build our test environment, we configured a single-node Oracle 11g R2 RAC database on a PowerEdge R810 server populated with two sockets of eight cores each, matching the total CPU core count of the legacy production environment. Using Quest Software Benchmark Factory, we populated the test data with the same TPC-C scale factor used for the legacy production environment.

8. Test iterations similar to those for the legacy production environment were carried out in the R810 test environment until we matched the maximum userload supported on the legacy production environment, with an SLA of 2 seconds AQRT.
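The following sketch, referenced in the list above, illustrates what the schema reload (step 3) and disk group growth (step 6) could look like between iterations. It is our own illustration under assumptions: the impdp options, the SYSASM connection, and the ASMLib disk names DATA11 through DATA20 are placeholders, not commands documented in the paper.

#!/bin/bash
# Sketch: restore the exported quest schema and add ten spindles to the
# DATA disk group before the next iteration. Disk names are hypothetical.

# Re-import the schema captured with expdp in step 3, replacing existing objects.
impdp system/oracle@racdb1 SCHEMAS=quest DIRECTORY=export TABLE_EXISTS_ACTION=replace

# Grow the DATA ASM disk group by ten disks; ASM rebalances automatically.
# (Run with ORACLE_SID set to the ASM instance; on 10g ASM connect as sysdba.)
sqlplus -s / as sysasm <<'EOF'
ALTER DISKGROUP DATA ADD DISK
  'ORCL:DATA11','ORCL:DATA12','ORCL:DATA13','ORCL:DATA14','ORCL:DATA15',
  'ORCL:DATA16','ORCL:DATA17','ORCL:DATA18','ORCL:DATA19','ORCL:DATA20'
  REBALANCE POWER 8;
EOF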

Figures 2 and 3 below compare transactions per second and AQRT between the legacy production and the R810 test environments using the base configuration of a 10-disk RAID 10 DATA ASM disk group.

Figure 2: Base Configuration - TPS Comparison Between Legacy Production and R810 Test Environment
[Chart: TPS versus user load (200-1600 users) for the legacy and R810 environments, each with 10 disks]

Figure 3: Base Configuration - AQRT Comparison Between Legacy Production and R810 Test Environment
[Chart: average response time in seconds versus user load (200-1800 users) for the legacy and R810 environments, each with 10 disks]

In Figures 2 and 3, the legacy production environment exhibits performance similar to the R810 test environment in terms of both transactions per second and AQRT. Do not be misled by these results. Further analysis of host CPU time in terms of USER time and IOWAIT time revealed that the legacy production environment exhibited a higher USER-to-IOWAIT time ratio than the R810 test environment, as shown in Figure 4.

Figure 4: Base Configuration - CPU Time Comparison Between Legacy Production and R810 Test Environment
[Chart: average IOWAIT, average user time, and average system time (% CPU utilization) for the legacy and R810 environments, each with 10 disks]

The chart above reveals a very interesting fact: compared to the legacy production environment, the R810 test environment, with its faster CPUs and overall more efficient design, handled the OLTP workload much faster and therefore exhibited a low USER-to-IOWAIT time ratio (0.45 for legacy vs. 0.11 for the R810 at a 1400-user load; a brief sketch of deriving this ratio from sar output appears below). Since both environments had an identical storage configuration, the higher IOWAIT and lower USER CPU time on the R810 test environment were due to the faster processing power available in that environment. Overall, Figure 4 shows that in order to take advantage of the faster processing power of the R810 test environment, we need to remove the I/O bottleneck and reduce the IOWAIT time.

This result led to further tests and analysis. We decided to verify our conclusions by alleviating the I/O bottleneck in both the legacy production and the 11G test environments, increasing the spindle count of the DATA disk group in increments of 10 disks. For the legacy production environment, we continued the iterations, adding disks until we reached minimal IOWAIT on the host CPU. At this CPU saturation point, we captured the maximum userload supported with an AQRT of 2 seconds and termed it the legacy saturation userload. For the new R810 test environment, we performed similar iterations, adding disks until we reached the legacy saturation userload.
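As referenced above, the USER-to-IOWAIT ratio can be approximated from a standard sar CPU sample. This is our own rough sketch, not the instrumentation used in the study, and it assumes the usual sysstat column order (%user in column 3 and %iowait in column 6 of the Average line).

#!/bin/bash
# Sketch: derive an average USER-to-IOWAIT ratio from a 5-minute sar sample.
sar -u 10 30 | awk '/^Average:/ {
    user = $3; iowait = $6;
    printf "avg %%user=%s  avg %%iowait=%s  USER/IOWAIT ratio=%.2f\n",
           user, iowait, (iowait > 0 ? user / iowait : 0)
}'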

Figures 5 and 6 compare the test results for the R810 test environment and the legacy production environment at the legacy saturation userload.

Figure 5: At Legacy Saturated User Load - TPS Comparison Between Legacy Production and R810 Test Environment
[Chart: TPS versus user load (200-4200 users) for the legacy and R810 environments at the legacy saturated user load]

Figure 6: At Legacy Saturated User Load - AQRT Comparison Between Legacy Production and R810 Test Environment
[Chart: AQRT in seconds versus user load (200-4200 users) for the legacy and R810 environments at the legacy saturated user load]

As seen in Figures 5 and 6, at the legacy production environment's saturated userload, the TPS and AQRT of both environments are similar.

Figure 7: At Legacy Saturated User Load - CPU Time Comparison Between Legacy Production and R810 Test Environment
[Chart: average IOWAIT, average user time, and average system time (% CPU utilization) for the legacy and R810 environments at the legacy saturated user load]

Analysis of the CPU time in Figure 7 revealed that the CPU user time for the legacy production environment increased drastically and the CPU was optimally utilized for productive work, as expected given that the CPU IOWAIT time neared zero. We therefore concluded that the legacy production environment will not scale up with the further addition of disks and host memory. It is also observed that the average CPU user time for the R810 test environment is about 40% of total CPU time, whereas on the legacy production environment it is about 70%. Because CPU IOWAIT time on the R810 test environment is still about 38% of total CPU time, this environment can be scaled further by reducing the CPU IOWAIT time and converting it into productive work, which can be achieved by further increasing the number of disks in the backend.

CONSOLIDATION FACTOR

Based on the above test results, one can conclude that a single-node Oracle 11g R2 RAC database running on an 11G PowerEdge R810 4/2-socket server populated with two sockets (eight-core processors) was able to handle the OLTP workload of a two-node Oracle RAC cluster running on 9G PowerEdge 2950 III servers with both sockets populated (quad-core processors). Thus we can achieve a consolidation factor of 4 when we fully populate the R810 server. Using this consolidation factor, we can consolidate an Oracle RAC cluster with many nodes onto fewer nodes. For example, if we populate all four sockets of the R810 servers in a two-node Oracle RAC setup, it can accomplish the OLTP workload of an eight-node Oracle RAC cluster running on PowerEdge 2950 III servers, provided both environments are configured with sufficient host memory and I/O disk subsystems.
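The core arithmetic behind this factor can be made explicit. The short sketch below is our own illustration of the paper's reasoning (not test tooling from the study), using the socket and core counts given earlier.

#!/bin/bash
# Back-of-the-envelope core math behind the consolidation factor.
legacy_cores_per_node=$((2 * 4))   # PowerEdge 2950 III: 2 sockets x 4 cores
r810_cores_half=$((2 * 8))         # R810 with 2 of 4 sockets populated
r810_cores_full=$((4 * 8))         # R810 fully populated

echo "Half-populated R810 matches $((r810_cores_half / legacy_cores_per_node)) legacy nodes"   # -> 2
echo "Fully populated R810 matches $((r810_cores_full / legacy_cores_per_node)) legacy nodes"  # -> 4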

Also, as seen in the CPU time analysis graphs (Figure 7), CPU utilization for productive work on the R810 server was only about 40%. There was therefore enough CPU headroom to scale up further, in terms of userload supported at the assumed AQRT of 2 seconds, by reducing the CPU IOWAIT time. We may then conclude that if the CPU IOWAIT time on the R810 system is brought down to nearly nil, the consolidation factor can be even higher, as high as 7.

SUMMARY

Database systems running Online Transaction Processing workloads require an optimal backend storage disk layout and disk quantity to efficiently service a large concurrent user population. Legacy servers running these types of workloads have suffered from inefficient CPU resource usage due to architectural limitations; only a limited number of disks or amount of memory could be serviced by a CPU core in such a system. In this white paper we demonstrated that PowerEdge 11G servers equipped with the Xeon 7500 series chipset for I/O and processor interfacing remove these bottlenecks and provide an ideal platform for consolidating legacy database environments. The R810 chipset is designed to support the Intel Xeon 7500 series processor family, QuickPath Interconnect, DDR3 memory technology, and PCI Express Generation 2.

This study also demonstrated that 11G servers offer large performance gains compared to older-generation servers. Database systems running on PowerEdge 11G servers exhibit better scalability when additional resources, such as disks and memory, are added. Customers running Oracle 9i or 10g RAC environments on legacy servers and storage can use the findings and test methodologies outlined in this white paper to consolidate power-hungry RAC nodes into fewer, faster, more energy-efficient nodes. As discussed in the earlier section, customers can expect a consolidation factor of at least 4 (and up to 7) depending on their database usage patterns. The resulting legacy RAC node consolidation can also drive down Oracle licensing costs, producing savings that can be used to increase backend storage resources to improve AQRT, implement disaster recovery sites, and build additional RAC test beds for application development and testing. The reduced number of nodes does not compromise performance when paired with PowerEdge 11G servers. The result is less cluster overhead, simplified management, and positive movement toward the objective of simplifying IT and reducing complexity in data centers.