
SAS workload performance improvements with IBM XIV Storage System Gen3
Including performance comparison with the XIV second-generation model

Narayana Pattipati
IBM Systems and Technology Group ISV Enablement
May 2013

Table of contents

Abstract
Introduction
XIV Storage System Gen3
How do SAS workloads benefit from enhancements in XIV Gen3?
Deployment example: SAS Grid on Power servers with XIV Gen3 and GPFS
    Configuration and tuning guidelines
I/O performance with XIV Gen3
    Benchmark workload details
    SAS workload I/O performance
    40-session workload benchmark results
    80-session workload benchmark results
    Grid and workload scalability
Summary of performance comparison between XIV second-generation and XIV Gen3 models
SSD cache usage for grid workloads
Summary
Resources
Acknowledgements
About the author
Trademarks and special notices

Abstract

This paper describes the feature enhancements in IBM XIV Storage System Gen3 (software release 11.1.1) and how these enhancements can help improve I/O performance for SAS business analytics workloads. The paper also demonstrates how SAS Grid is deployed on IBM Power servers with XIV Gen3 and IBM General Parallel File System (GPFS), along with I/O performance data for SAS Grid workloads and a comparison of I/O performance with the IBM XIV Storage System second-generation model.

Introduction

SAS analytics provide an integrated environment for predictive and descriptive modeling, data mining, text analytics, forecasting, optimization, simulation, experimental design, and more.

The IBM Power Systems family of servers includes proven workload consolidation platforms that can help clients control costs while improving overall performance, availability, and energy efficiency. With these servers and IBM PowerVM virtualization solutions, an organization can consolidate large numbers of applications and servers, fully virtualize its system resources, and provide a more flexible and dynamic IT infrastructure.

The IBM XIV Storage System is a high-end disk storage system designed to provide consistent enterprise performance and exceptional ease of use. IBM XIV Storage System Gen3, the latest generation of the system (at the time of this publication), can provide improved performance for clients' most demanding application workloads, such as virtualization, analytics, and cloud computing. Together, IBM Power servers with IBM GPFS and IBM XIV Storage System Gen3 provide an integrated analytical environment that can support the business analytics needs of the most demanding organizations.

This technical white paper describes the new features and enhancements delivered with XIV Gen3 release 11.1.1, and how those enhancements improve I/O performance for SAS workloads. The paper also presents detailed I/O performance results for SAS workloads on XIV Gen3, along with a comparison against the XIV second-generation model. The benchmark results published in the paper reflect a SAS Grid Computing environment deployed on IBM Power 780 servers with GPFS and XIV Gen3.

This paper is an extension to the detailed white paper SAS 9.3 grid deployment on IBM Power servers with IBM XIV Storage System and IBM GPFS, published in 2012 at ibm.com/support/techdocs/atsmastr.nsf/webindex/wp102192. Refer to that paper for information about deployment models, configuration, and tuning guidelines.

XIV Storage System Gen3

The IBM XIV Storage System is a high-end, easily scalable, enterprise disk storage system designed to deliver performance, scalability, and ease of management. The XIV Gen3 (Model 114) system adds many new features and enhancements that deliver improved I/O performance over the second-generation model (Model A14). Figure 1 summarizes a technical comparison between the two models.

Figure 1: Technical comparison between XIV Gen3 and XIV second-generation models

Note: As of 26 October 2012, the XIV second-generation system (Model A14) is no longer available for sale.

The following list summarizes some of the key XIV Gen3 features and enhancements that play a key role in I/O performance improvements:

- InfiniBand interconnect between modules provides 20 times the internal bandwidth of the XIV second-generation model
- New interface and data modules
  - More processing power and system throughput
  - Total of 60 cores, 120 hyper-threads per rack
- Fifty percent more cache capacity compared to the XIV second-generation model
  - 24 GB cache (DDR3) per module (total 360 GB for a full rack)
  - Significantly improves cache performance even without solid-state drive (SSD) caching
- Over two times the external bandwidth of the XIV second-generation model
  - 8Gb Fibre Channel (FC) and 1Gb iSCSI on all interface modules
  - Up to twenty-four 8Gb FC ports and twenty-two 1Gb iSCSI ports

- SSD cache for improving random read I/O
  - Optional (priced) SSD cache
  - 400 GB SSD per module
  - 6 TB of SSD cache for a full rack

Figure 2: XIV Gen3 components

How do SAS workloads benefit from enhancements in XIV Gen3?

SAS analytics applications perform large-block sequential read and write operations. Some of the newer SAS business intelligence (BI) applications perform some random access of data but, for the most part, the SAS workload is characterized predominantly by large-block sequential I/O requests with high volumes of data. The following XIV Gen3 enhancements help improve I/O performance for SAS workloads:

- InfiniBand interconnect between modules
- DDR3 memory cache of 360 GB (compared to 240 GB of DDR2 cache in the XIV second-generation model)
- 8Gb FC external connectivity (compared to 4Gb FC in the XIV second-generation model)
- 6 TB of SSD cache (for random I/O)

Deployment example: SAS Grid on Power servers with XIV Gen3 and GPFS

This section describes, as an example, a SAS Grid deployment on IBM Power servers with XIV Gen3 and GPFS. The deployment architecture is shown in detail in the following figure.

Figure 3: SAS Grid deployment architecture on IBM Power servers with XIV Gen3 and GPFS

Configuration and tuning guidelines

XIV Gen3 configuration details:
- XIV machine type: 2810
- Machine model: 114
- System version: 11.1.1
- Drives: 180 SAS drives, each with 2 TB capacity and 7200 rpm rotational speed
- Usable capacity: 161 TB
- Modules: 15
- Cache: 360 GB DDR3
- SSD cache: 6 TB
- Six 8Gb dual-port FC adapters (12 ports) connected to the storage area network (SAN)
- Stripe size: 1 MB (default)
- SSD cache enabled (by default) for all volumes used in the benchmark

Configuration and tuning guidelines remain the same for the rest of the stack. For more details about deployment models, configuration, and tuning guidelines, refer to the SAS 9.3 grid deployment on IBM Power servers with IBM XIV Storage System and IBM GPFS paper at ibm.com/support/techdocs/atsmastr.nsf/webindex/wp102192. That paper also includes the configuration of the XIV second-generation model used in the earlier benchmark.

I/O performance with XIV Gen3

As part of the benchmark, the SAS mixed analytics workload was run on a SAS Grid deployed on an IBM Power 780 server with XIV Gen3 and GPFS. The deployment model is shown in Figure 3. This section describes the benchmark results with XIV Gen3 in detail and compares them with the XIV second-generation model.

Benchmark workload details

The benchmark test used the grid to simulate the workload of a typical SAS Foundation customer using Grid Computing. The goal was to evaluate the multiuser performance of the SAS software on IBM Power servers with XIV Gen3 and GPFS. The workload places an emphasis on I/O, which SAS software uses heavily while running a typical SAS Foundation program. Details of the workload used during the benchmark are described in the SAS 9.3 grid deployment on IBM Power servers with IBM XIV Storage System and IBM GPFS paper at ibm.com/support/techdocs/atsmastr.nsf/webindex/wp102192.

The performance measurement for these workloads is response time (in seconds), which is the cumulative processor usage time (user + sys) of all the jobs in the workload. A lower response time means the workload performed better. Throughout this paper, this metric is referred to as workload response time.
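The workload response time metric defined above is a simple aggregation of per-job CPU time, which can be sketched as follows. The per-job timings here are hypothetical illustration values, not figures from the benchmark.

```python
# Workload response time: cumulative processor time (user + sys)
# summed across every job in the workload run.
# The per-job timings below are hypothetical, for illustration only.
jobs = [
    {"user": 210.4, "sys": 12.6},
    {"user": 185.0, "sys": 9.3},
    {"user": 240.7, "sys": 15.1},
]

def workload_response_time(jobs):
    """Sum user + sys CPU seconds over all jobs; lower is better."""
    return sum(j["user"] + j["sys"] for j in jobs)

print(round(workload_response_time(jobs), 1))  # 673.1 for the sample jobs
```

In the benchmark runs reported below, this sum is taken over all 144 jobs (40-session workload) or 288 jobs (80-session workload).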
SAS workload I/O performance

The benchmark testing used the following workload scenarios:
- 40-session workload
- 80-session workload
- Grid and workload scalability

40-session workload benchmark results

The following list summarizes the 40-session grid workload on the deployment architecture (as shown in Figure 3):
- Peak I/O throughput of 4.7 GBps and sustained throughput of 4.0 GBps at XIV
- Equivalent I/O throughput of around 285 MBps per core, considering that the grid is deployed on 14 cores
- Average latency (disk response time) of 6 ms
- Workload response time (for 144 jobs) of 37795 seconds
- Average grid processor usage of 45%, processor wait of 5%, and processor idle of 50% during the workload run
- Power 780 server at 80% utilization at peak throughput
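The per-core figure above follows directly from the sustained throughput and the core count; the quick check below assumes decimal storage units (1 GBps = 1000 MBps), which matches the paper's rounding.

```python
# Per-core I/O throughput for the 40-session run, assuming
# decimal units (1 GBps = 1000 MBps), as is conventional for storage.
sustained_gbps = 4.0   # sustained throughput at XIV, GBps
grid_cores = 14        # cores allocated to the four grid nodes

mbps_per_core = sustained_gbps * 1000 / grid_cores
print(round(mbps_per_core))  # 286, i.e. "around 285 MBps per core"
```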

Comparison of performance metrics for the 40-session grid workload between the XIV second-generation and XIV Gen3 models:
- 210% improvement in I/O throughput
- 86% reduction in disk response time (latency)
- Cache hit improved from 15% (XIV second-generation model) to 37% (XIV Gen3)
- 45% reduction in workload response time

Figure 4: 40-session workload comparison of performance metrics between the two XIV models

Figure 5: 40-session grid workload throughput for XIV and GPFS (also shows XIV latency)

Figure 4 provides detailed performance metrics for the 40-session workload, including a comparison of the metrics between the XIV second-generation and XIV Gen3 models. Figure 5 compares the throughput for XIV and GPFS for the 40-session workload; the differences in throughput are attributed to some of the I/O being serviced from the GPFS pagepool. The pagepool is the GPFS cache of dedicated and pinned memory on each node; this cache allows client-side caching of user data and file system metadata and offers a performance gain by avoiding transfers to and from disk. Figure 5 also shows the disk access latency for the workload.

Figure 6 shows the processor usage trend for the workload; the Power 780 server is at 80% utilization at peak throughput. Figure 7 shows a comparison of the throughput and latency for the 40-session workload between the XIV models.

Figure 6: 40-session grid workload processor usage for the grid

Figure 7: 40-session grid workload comparison of throughput and latency between the XIV models

80-session workload benchmark results

The following list summarizes the benchmark results for the 80-session grid workload:
- Peak I/O throughput of 5.3 GBps and sustained throughput of 4.3 GBps at XIV
- Equivalent I/O throughput of around 305 MBps per core, considering that the grid is deployed on 14 cores
- Average latency (disk response time) of 8.5 ms

- Workload response time (for 288 jobs) of 120656 seconds
- Average grid processor usage of 76%, processor wait of 3%, and processor idle of 21% during the workload run
- Power 780 server at 95% utilization at peak throughput

Comparison of performance metrics for the 80-session grid workload between the XIV second-generation and XIV Gen3 models:
- 200% improvement in I/O throughput (sustained throughput of 1450 MBps with the XIV second-generation model compared to 4275 MBps with XIV Gen3)
- 86% reduction in disk response time (latency) (60 ms with the XIV second-generation model compared to 8.5 ms with XIV Gen3)
- 49% reduction in workload response time (120656 seconds with XIV Gen3 compared to 235065 seconds with the XIV second-generation model)

Figure 8 shows the processor usage for the grid (14 cores allocated to the four grid nodes) for the 80-session workload. As shown in the figure, the Power 780 server is at 95% utilization at peak throughput.

Figure 8: 80-session grid workload processor usage for the grid

Grid and workload scalability

The benchmark test exercises the scalability of the SAS Grid on IBM Power servers with XIV Gen3 and GPFS. As the number of sessions grows, the grid is expanded by adding nodes. Starting with one node and 10 workload sessions, grid scalability is measured by adding nodes and scheduling additional workload sessions: 20 sessions on two nodes, 30 sessions on three nodes, and 40 sessions on four nodes. Each added node increases the processing capacity of the grid; when additional workload sessions are scheduled against that capacity (on a one node = 10 sessions basis), the overall workload performance (workload response time) should scale linearly. Note that the back-end XIV storage capacity and FC connectivity [N_Port ID Virtualization (NPIV) through the Virtual I/O Server (VIOS)] from the Power 780 server to the SAN fabric remain the same as nodes are added to the SAS Grid.
Figure 9 shows the scalability of the SAS Grid on the IBM Power 780 server using XIV Gen3. As shown in the figure, the grid scaled linearly as nodes were scaled from one to four and workload sessions were incremented from 10 to 40.

Figure 9: Scalability of grid on the Power 780 server as nodes are added and additional jobs scheduled Summary of performance comparison between XIV second-generation and XIV Gen3 models This section summarizes the benchmark performance comparison between XIV Gen3 and XIV secondgeneration models, based on the following metrics: Workload response time (seconds) Throughput at XIV (MBps) Latency (disk response time) at XIV (ms) The workloads are: 10 sessions on one node; 20 sessions on two nodes; 30 sessions on three nodes 40 sessions on four nodes 60 and 80 sessions on four nodes Note that the capacity of XIV and FC connectivity between the Power 780 server and the SAN fabric remain the same irrespective of the number of sessions for the grid workload. Also, the 60 and 80 sessions were run on the four-node grid without adding any additional capacity (such as processors, memory or I/O). The four-node grid resources remain the same for 40, 60, and 80 session workloads. As shown in Figure 10, the grid performs much better on XIV Gen3 across the workloads. The workload response time improvement varies from 40% to 49%, depending on the workload. As shown in Figure 11, the I/O throughput improvement varies from 110% to 210% depending on the workload, and the latency (disk response time) is reduced by about 85% across the workloads, as shown in Figure 12. The throughput achieved during the testing with XIV Gen3, on the deployment architecture described in Figure 3, is a factor of user workload characteristic. The infrastructure (with XIV Gen3) can handle higher throughput depending on the workload. Refer to the Storage Performance Council SPC-2/E benchmark results for XIV Gen3 at: http://www.storageperformance.org/benchmark_results_files/spc- 2E/IBM/BE00001_IBM_XIV/be00001_IBM%20_XIV-Storage-System_SPC2E_full-disclosure.pdf for more details. 9

Figure 10: Grid workload response time comparison between the two XIV models

Figure 11: Throughput comparison between the two XIV models

Figure 12: Latency comparison between the two XIV models
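The quoted improvement percentages can be cross-checked against the raw 80-session figures reported earlier (1450 MBps vs. 4275 MBps throughput, 60 ms vs. 8.5 ms latency, and 235065 s vs. 120656 s response time). The helpers below are a sketch for that arithmetic, not part of the benchmark tooling.

```python
# Verify the quoted improvement/reduction percentages from the
# raw 80-session figures given in the text.
def pct_improvement(old, new):
    """Percentage gain of `new` over `old` (higher is better)."""
    return (new - old) / old * 100

def pct_reduction(old, new):
    """Percentage drop from `old` to `new` (lower is better)."""
    return (old - new) / old * 100

throughput = pct_improvement(1450, 4275)    # sustained MBps at XIV
latency = pct_reduction(60, 8.5)            # disk response time, ms
response = pct_reduction(235065, 120656)    # workload response time, s

print(round(throughput))  # 195 -- quoted as roughly 200%
print(round(latency))     # 86
print(round(response))    # 49
```

The throughput gain computes to about 195%, which the paper rounds to 200%; the latency and response time reductions match the quoted 86% and 49%.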

SSD cache usage for grid workloads

When SSD caching is enabled globally at the XIV system level, it is enabled by default for all newly created volumes. During the grid benchmark on XIV Gen3, SSD caching was enabled for all 16 volumes used for the SASWORK and SASDATA file systems.

The SAS Grid workload used in the benchmark is meant to reflect a real-world SAS user environment, and is predominantly large-block (greater than 512 KB) sequential I/O with a read/write ratio of 55 to 45. Because this particular workload does not take advantage of the XIV enhanced SSD caching, the SSD cache hit was found to be negligible for this benchmark. As a workload begins to reflect the favorable characteristics listed below, the performance improvements will be much higher, as XIV Gen3 has up to 6 TB of extended SSD cache.

Typical workload characteristics that are a best fit for effective SSD cache usage (note that the SSD cache in XIV Gen3 is used only for read operations with less than a 64 KB transfer size):
- Highly random reads
- Read/write ratio of 80 to 20
- Small block size (less than 64 KB)

Figure 13 illustrates the cache hit for the 40-session workload. In the figure, the graph at the top shows the Total Read Cache Hit for the workload, and the graph at the bottom shows the SSD Hit and Memory Hit. The Memory Hit parameter indicates read I/O served from the main dynamic random-access memory (DRAM) cache, and the SSD Hit parameter indicates read I/O served from the extended SSD cache; the sum of Memory Hit and SSD Hit equals the Total Read Cache Hit (shown in the graph at the top of the figure). The SSD cache hit is negligible for the 40-session workload: at peak throughput, the Total Read Cache Hit is 1948 MBps, of which the DRAM Memory Hit is 1940 MBps and the SSD Hit is just 8 MBps, less than 0.5% of the total. The SSD cache hit was found to be negligible for the 10-, 20-, 30-, 60-, and 80-session workloads as well.
Figure 13: Memory and SSD cache hit for a 40-session workload, as seen from the XIV GUI

Summary

The IBM XIV Storage System Gen3 is well suited for running SAS workloads and comes with many new features and enhancements that can provide significant I/O performance improvements. This paper demonstrated the performance improvements for SAS workloads when compared to the earlier generation of XIV.

Resources

The following links provide useful references to supplement the information contained in this paper:

- IBM Power Systems Information Center: http://publib.boulder.ibm.com/infocenter/pseries/index.jsp
- Power Systems on IBM PartnerWorld: ibm.com/partnerworld/systems/p
- AIX on IBM PartnerWorld: ibm.com/partnerworld/aix
- SAS 9.3 grid deployment on IBM Power servers with IBM XIV Storage System and IBM GPFS: ibm.com/support/techdocs/atsmastr.nsf/webindex/wp102192
- Grid Computing in SAS 9.3: http://support.sas.com/documentation/cdl/en/gridref/64808/pdf/default/gridref.pdf
- SAS Grid Computing: http://www.sas.com/resources/factsheet/sas-grid-computing-factsheet.pdf
- IBM XIV Storage System Gen3 Architecture, Implementation, and Usage: ibm.com/redbooks/abstracts/sg247659.html
- Solid-State Drive Caching in the IBM XIV Storage System: ibm.com/redbooks/redpieces/abstracts/redp4842.html
- XIV Gen3 Grid Storage: ibm.com/common/ssi/cgi-bin/ssialias?infotype=sa&subtype=wh&htmlfid=tsw03178chen
- Storage Performance Council SPC-2/E benchmark results for XIV Gen3:
  - http://www.storageperformance.org/benchmark_results_files/spc-2E/IBM/BE00001_IBM_XIV/be00001_IBM%20_XIV-Storage-System_SPC2E_full-disclosure.pdf
  - http://www.storageperformance.org/benchmark_results_files/spc-2/fujitsu_spc-2/B00063_Fujitsu_DX8700-S2/b00063_Fujitsu_DX8700-S2_SPC2_executive-summary.pdf

Acknowledgements

The author would like to thank the following team members for their help, guidance, and technical review of the paper:

- Frank Battaglia, Technology Manager, IBM Certified IT Specialist, IBM Systems and Technology Group
- Harry Seifert, Executive IT Specialist, IBM Sales and Distribution, TSS
- Bill Watson, ISV Storage Business Development Manager, IBM Systems and Technology Group, Systems Software
- Jerry Heyman, Technical Consultant - System p / AIX, IBM Systems and Technology Group
- David Quenzler, Technical Solution Architect, IBM Systems and Technology Group
- Brian Porter, Senior IT Specialist, IBM Systems and Technology Group
- Ling Pong, Certified IT Specialist, IBM Advanced Technical Skills Solutions
- Randy J. George, XIV ISV Enablement Project Manager, IBM Systems and Technology Group
- Yijie Zhang, Integrated Performance Solution Analyst, IBM Systems and Technology Group
- Diane Benjuya, Market Segment Manager, Marketing Communications for XIV, IBM Systems and Technology Group
- Tony Brown, Principal Software Developer, SAS Performance Lab Research and Development

About the author

Narayana Pattipati works as a technical consultant in the IBM Systems and Technology Group ISV Technical Enablement team. He has more than 12 years of experience in the IT industry and currently works with software vendors to enable their software solutions on IBM platforms. You can reach Narayana at npattipa@in.ibm.com.

Trademarks and special notices

References in this document to IBM products or services do not imply that IBM intends to make them available in every country.

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.

INFINIBAND, InfiniBand Trade Association, and the INFINIBAND design marks are trademarks and/or service marks of the INFINIBAND Trade Association.

Other company, product, or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind. All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide home pages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.
All statements regarding IBM future direction and intent are subject to change or withdrawal without notice and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller for the full text of the specific Statement of Direction.

Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function, or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM's current investment and development activities as a good faith effort to help with our customers' future planning.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product, and use of those websites is at your own risk.