IBM i Virtual I/O Performance in an IBM System Storage SAN Volume Controller with IBM System Storage DS8000 Environment

This document can be found in the IBM Techdocs library, www.ibm.com/support/techdocs
Search for document number WP101435 under the category of White Papers.

Version 1.0, February 16th, 2009

Ingo Dimmer, IBM ATS System Storage Europe
Henry May, IBM Rochester Development

http://www.ibm.com/support/techdocs/atsmastr.nsf/webindex/wp101435 Page 1/13
Purpose

This technical white paper discusses performance characterization and configuration best practices for IBM i in a virtual I/O high-end external storage environment with IBM System Storage DS8000 attached natively, through the IBM PowerVM Virtual I/O Server (VIOS), and through VIOS and the IBM System Storage SAN Volume Controller (SVC). The discussion is based on measurements performed in the IBM Systems Lab Europe in Mainz, Germany in collaboration with the IBM Rochester, Tucson and Hursley development labs.

Acknowledgements

Many thanks to the following IBM colleagues for their support and guidance in this project:

Don Pischke, Henry May, Wes Varela: IBM Rochester development lab
Lee La Frese, Joe Hyde: IBM Tucson Performance lab
Dave Sinclair, Barry Whyte, Nick O'Rourke: IBM Hursley development lab
Disclaimer Notice & Trademarks

THE INFORMATION PROVIDED IN THIS DOCUMENT IS DISTRIBUTED "AS IS" WITHOUT ANY WARRANTY, EITHER EXPRESS OR IMPLIED. IBM EXPRESSLY DISCLAIMS ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT. IBM shall have no responsibility to update this information. The performance data contained herein was obtained in a controlled, isolated environment. Actual results that may be obtained in other operating environments may vary significantly. While IBM has reviewed each item for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents or copyrights. Inquiries regarding patent or copyright licenses should be made, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.

IBM, the IBM logo, FlashCopy, POWER6, POWER Hypervisor, Power Systems, PowerVM, Redbooks, System Storage, System i, System x and i5/OS are trademarks of International Business Machines Corporation in the United States, other countries, or both. Other company, product and service names may be trademarks or service marks of others.
Table of Contents

Purpose 2
Acknowledgements 2
Disclaimer Notice & Trademarks 3
1 Introduction 5
1.1 Virtual I/O Server and SAN Volume Controller Architecture 5
1.2 Virtual I/O Advantages 6
1.3 Best Practices for Implementing and Using Virtual I/O 7
2 Performance Case Studies 8
2.1 Configuration Overview 8
2.2 Transaction Workload Performance 8
2.3 Sequential Workload Performance 11
3 Summary and Conclusions 12
4 References 13
1 Introduction

Since the release of IBM i 6.1 (formerly known as i5/OS V6R1) in March 2008, IBM i is supported as a client of the IBM PowerVM Virtual I/O Server (VIOS), which enables a new variety of SAN external storage solutions for IBM i POWER6 systems. Since the SVC 4.3.1 release, IBM i is also supported as a virtual I/O client of VIOS behind the IBM System Storage SAN Volume Controller (SVC). This means there is now a wide selection of IBM i external storage options, including both IBM and non-IBM external storage. This introductory section provides an overview of the VIOS and SVC virtual I/O architecture and its advantages over traditional native SAN storage attachment to IBM i.

1.1 Virtual I/O Server and SAN Volume Controller Architecture

With IBM i as a client of VIOS on IBM Power Systems POWER6 servers, IBM i can be attached to common 512-bytes-per-sector storage systems with or without the IBM System Storage SAN Volume Controller (SVC), i.e. the storage system is not required to support native IBM i 520-bytes-per-sector storage. The IBM POWER Hypervisor (PHYP) performs the sector conversion of the traditional IBM i 8x 520-byte sectors into 9x 512-byte sectors per 4 KB memory page, as shown in Figure 1.

[Figure 1: IBM i Client of the Virtual I/O Server. Diagram: the IBM i client's VSCSI client adapters connect through PHYP, which performs the 8-to-9 sector conversion, to VSCSI server adapters on VIOS; the VIOS multi-path driver and FC adapters attach the hdisks of a DS storage system or SVC, which are presented to IBM i as type 6B22 virtual LUNs.]

The IBM System Storage SAN Volume Controller (device type 2145) is a block-level in-band storage virtualization system deployed as a cluster of node pairs; the nodes are modified IBM System x servers running specialized storage virtualization software.
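The arithmetic behind this sector conversion can be sketched as follows. This is an illustrative calculation of the mapping described above, not IBM's PHYP implementation:

```python
# Illustrative sketch of the PHYP sector mapping described above:
# 8 of IBM i's 520-byte sectors (512 data bytes plus an 8-byte
# header each) are repacked into 9 industry-standard 512-byte sectors.
IBM_I_SECTOR = 520   # 512 data bytes + 8 header bytes
STD_SECTOR = 512
PAGE_DATA = 4096     # one 4 KB memory page of user data

total_bytes = 8 * IBM_I_SECTOR               # 4160 bytes to carry
std_sectors = -(-total_bytes // STD_SECTOR)  # ceiling division

assert 8 * (IBM_I_SECTOR - 8) == PAGE_DATA   # the 8 data payloads fill one 4 KB page
print(std_sectors)  # 9 standard sectors per 4 KB page
```

The 8-byte per-sector headers are the reason a 4 KB page cannot simply occupy eight 512-byte sectors: the extra 64 bytes of header data push the total to 4160 bytes, requiring a ninth sector.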
Each node has four 4 Gb Fibre Channel ports, which must be connected to a (redundant) SAN switch environment for attachment to backend storage systems and host servers and for SVC intra-cluster node communication. With SVC, the SCSI logical units (LUNs) provided by attached backend storage systems such as IBM System Storage DS8000, called managed disks (Mdisks) in SVC terminology, are grouped into so-called Mdisk groups (MDGs). The MDGs provide storage capacity at a granularity of extents (the extent size is user-selectable from 16 MB to 2 GB) which are used by
the SVC storage virtualization software for creating virtual disks (Vdisks) presented to hosts such as VIOS, as shown in Figure 2.

[Figure 2: SAN Volume Controller Storage Virtualization. Diagram: virtual disks VD 1-6 are mapped through managed disk groups MDG 1-2 and managed disks MD 1-8 to SCSI LUNs on IBM DS8000 and IBM XIV backend storage, and presented to IBM i VIO clients 1 and 2 through Virtual I/O Servers 1 and 2. Note: connections shown are of a generic type and do not represent actual SAN connections.]

1.2 Virtual I/O Advantages

Using VIOS, with or without SVC, for attaching IBM i to external storage adds one or more layers to the I/O path compared to IBM i native external storage attachment. The virtual I/O performance measurement results in section 2 show that there should be no need to worry about the latency added by VIOS, especially when using SVC with its additional caching. The additional storage management complexity of the added VIOS and SVC appliances can easily be outweighed by the following advantages, typically available only in a virtual I/O environment:

Advantages of using the IBM Virtual I/O Server:
- server consolidation (virtual SCSI, virtual Fibre Channel 1, Shared Ethernet Adapter)
- a single multi-path device driver serving multiple virtual I/O clients
- enables advanced functions like Live Partition Mobility (currently not supported for IBM i), NPIV 1, virtual tape 1 and Active Memory Sharing 1

Advantages of using the IBM System Storage SAN Volume Controller:
- non-disruptive storage migrations over a wide range of IBM and non-IBM storage
- Copy Services between storage systems of different types or vendors

1 Statement of Direction:
- IBM intends to support NPIV with IBM i and Linux environments in 2009.
- IBM intends to support VIOS virtual tape capabilities with IBM i and Linux environments in 2009.
- IBM intends to enhance PowerVM with Active Memory Sharing, an advanced memory virtualization technology, in 2009.
- usage-based rather than installed-capacity Copy Services licenses
- enables efficient and simple deployment of tiered storage
- centralized storage management
- scalable, investment-protecting performance (up to 8 nodes per cluster)

1.3 Best Practices for Implementing and Using Virtual I/O

The following configuration and performance considerations can be regarded as best practices when using virtual I/O for IBM i external storage attachment:

Use a dedicated processor for VIOS and 1-2 GB of memory for IBM i performance-critical workloads
- As a sizing rule of thumb, a single dedicated POWER6 processor for VIOS is sufficient for about 40,000 virtual SCSI I/Os per second

Use separate virtual SCSI client adapters for IBM i disks and CD or tape drives
- This helps avoid disk I/O impacts when a (virtual) CD or tape IOP needs to be disabled to release a shared CD drive, or reset after a tape configuration change

Up to 16 virtual disk LUNs are supported per IBM i virtual SCSI adapter
- Multiple virtual SCSI adapters can be created on HMC-managed systems for attaching more than 16 disk LUNs

If using separate IBM i Auxiliary Storage Pools (ASPs) with dedicated disk arms in the backend storage, use a separate virtual SCSI adapter on VIOS for each IBM i ASP
- This helps to more easily identify the virtual LUN (type 6B22 with a random serial number) to ASP association on IBM i
- Determine the correct disk units to be added to a given ASP from the Display Disk Unit Details output (Sys Card = VSCSI slot, Ctl XOR 0x80 = LUN ID on VIOS)

Stopping an SVC FlashCopy puts the target volume into an inaccessible state for the host
- Delete the FlashCopy relationship if it is no longer used to prevent DISK OPERATION ERRORs on VIOS

For SVC, configure a single Mdisk per storage system RAID rank
- This helps avoid overloaded RAID ranks due to the SVC queuing algorithm

For SVC attachment, limit the number of VIOS host paths to the recommended number of 4 paths per volume
- SVC supports a maximum of 8 paths per Vdisk, but using 4 paths showed better I/O performance and shorter path failover times
- Implement this by creating multiple zones for the VIOS host initiators, each with a subset of the SVC ports
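The Ctl XOR 0x80 rule for mapping IBM i disk units to VIOS LUN IDs can be expressed as a one-line helper. This is an illustrative sketch; the example Ctl value is hypothetical, not taken from the measured configuration:

```python
def vios_lun_id(ctl_value: int) -> int:
    """Recover the VIOS LUN ID from the 'Ctl' value shown in the
    IBM i Display Disk Unit Details output (Ctl XOR 0x80 = LUN ID)."""
    return ctl_value ^ 0x80

# e.g. a hypothetical Ctl value of 0x85 maps to VIOS LUN ID 5:
print(vios_lun_id(0x85))  # 5
```

Because XOR is its own inverse, the same operation also converts a VIOS LUN ID back to the expected IBM i Ctl value, which is useful when verifying an ASP assignment in both directions.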
2 Performance Case Studies

This section presents the results of IBM i POWER6 virtual I/O performance tests obtained in an IBM System Storage DS8300 lab environment with and without the IBM System Storage SAN Volume Controller: Section 2.1 describes the IBM i external storage configurations used for the performance tests. Section 2.2 shows the IBM i virtual I/O performance results obtained from running the Commercial Processing Workload (CPW) transaction workload, including a natively attached DS8000 as a baseline. Section 2.3 augments the transaction workload results with measurements for sequential workload generated using IBM i virtual tape.

2.1 Configuration Overview

The following IBM i external storage configurations were used for the performance tests:

- IBM i 6.1 partition using 84 GB RAM and 7 dedicated processors of an IBM i570 POWER6 server (9406-MMA), with 104x 70 GB LUNs (20 LUNs in the system ASP, 56 LUNs in the DB ASP, 28 LUNs in the journal ASP)
- VIOS 2.1 partition using 1 GB memory, 1 dedicated POWER6 processor, SDDPCM multi-pathing, and 2 PCIe 4Gb Dual-Port Fibre Channel Adapters (#5774; 2 ports for the DB ASP, 2 ports shared between the system and journal ASPs)
- IBM System Storage DS8300 Turbo R3.1/R4.2, 128 GB cache, 128x 146 GB 15k FC drives (system ASP: 32 drives, DB ASP: 64 drives, journal ASP: 32 drives) configured as 16x RAID10 ranks
- IBM System Storage SAN Volume Controller V4.3.1, 2x 8G4 nodes, 8 GB cache; the SVC was attached to the DS8300 via each I/O drawer (4 paths in total) with 16 Mdisks configured for SVC, i.e. one LUN per RAID rank

2.2 Transaction Workload Performance

The transaction workload measurements were performed using the IBM i Commercial Processing Workload (CPW), ramping up to 144k simulated DB warehouse users in 16k user increments. Figure 3 shows the average CPW application response time a user would see for the corresponding CPW transaction workload.
[Figure 3: CPW Application Response Time. Chart of application response time in milliseconds (0-50) versus throughput in transactions/minute (0-200,000) for DS8k R3.1 native, DS8k R3.1 via VIOS, and DS8k R4.2 via VIOS & SVC.]

Comparing the natively attached run (red line) with the VIOS-attached run (blue line) in Figure 3 shows that introducing VIOS into the configuration causes a slight increase in application response time due to some latency added by VIOS. However, comparing both the native and the VIOS-attached runs with the VIOS & SVC run (green line) makes it evident that introducing SVC into the configuration achieves performance superior even to native DS8300 attachment. SVC introduces another storage I/O cache hierarchy and can take advantage of the disk-seek-optimized one-LUN-per-rank DS8300 configuration, enabling it to over-compensate for the latency added by VIOS.

Figure 4 shows the corresponding disk response time of the database ASP for the CPW transaction workload.

[Figure 4: CPW Database Disk Response Time. Chart of disk response time in milliseconds (0-12) versus throughput in I/O per second (0-30,000) for the same three configurations.]

Looking at the native run versus the VIOS and VIOS & SVC runs shows that of the three configurations, the natively attached one uses the fewest I/Os to sustain the CPW transaction workload. The native configuration can leverage the SCSI Skip Read and Skip Write operations (several individual SCSI Reads or Writes aggregated using a skip mask) natively supported by the DS8000. VIOS does not support skip I/Os, so with IBM i as a client of VIOS, IBM i storage management emulates each skip I/O by sending several individual SCSI Read or Write commands to VIOS.
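The skip I/O emulation described above can be modeled with a small sketch: a skip mask marks which blocks of a range to transfer, and without skip support each contiguous run of marked blocks must be sent as a separate plain SCSI command. This is an illustrative model only, not the actual IBM i storage management code:

```python
def runs_from_skip_mask(mask):
    """Split a skip mask (True = transfer this block) into contiguous
    runs of blocks; without skip I/O support, each run has to become
    a separate plain SCSI Read or Write command."""
    runs, start = [], None
    for i, bit in enumerate(mask):
        if bit and start is None:
            start = i                        # a new run begins
        elif not bit and start is not None:
            runs.append((start, i - start))  # record (offset, length)
            start = None
    if start is not None:
        runs.append((start, len(mask) - start))
    return runs

# One skip read covering this mask becomes three ordinary reads:
mask = [True, True, False, True, False, False, True, True]
print(runs_from_skip_mask(mask))  # [(0, 2), (3, 1), (6, 2)]
```

This is why the VIOS-attached configurations show higher I/O rates in Figure 4 for the same workload: what the DS8000 serves natively as one skip operation arrives as several separate commands.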
When we look at the data rate of the database ASP instead of the I/O throughput, as shown in Figure 5, the graphs for the three configurations align much more closely, because regardless of the configuration the same amount of user data is transferred.

[Figure 5: CPW Database Disk Data Rate. Chart of disk response time in milliseconds (0-12) versus throughput in MB/sec (0-200) for the three configurations.]

The IBM i journal ASP disk response time for the CPW transaction workload is shown in Figure 6.

[Figure 6: CPW Journal Disk Performance. Chart of disk response time in milliseconds (0-2) versus throughput in I/O per second (0-3,000) for the three configurations.]

As Figure 6 shows, with each higher step of simulated CPW users, represented by each higher data point on the curve, less additional journal I/O occurs, and at the last two or three data points the total journal I/O workload even decreases. Considering that the journal data rate increases almost linearly, as shown in Figure 7, the decrease in journal I/O is due to an increase in the average journal I/O size caused by the journal bundling performed by IBM i.
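The relationship between I/O rate, data rate, and transfer size that explains this effect is simple arithmetic. The numbers below are hypothetical round figures for illustration, not measured values from the charts:

```python
def avg_io_size_kb(mb_per_sec, ios_per_sec):
    """Average transfer size in KB implied by a data rate (MB/s)
    and an I/O rate (I/Os per second)."""
    return mb_per_sec * 1024 / ios_per_sec

# If journal bundling doubles the average transfer size, the data rate
# can keep growing while the I/O rate stays flat (hypothetical values):
print(avg_io_size_kb(40, 2500))  # 16.384 KB per journal I/O
print(avg_io_size_kb(80, 2500))  # 32.768 KB per journal I/O
```

In other words, a linearly growing data rate combined with a flat or falling I/O rate necessarily means the average I/O has grown larger, which is exactly the journal bundling behavior observed in Figures 6 and 7.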
[Figure 7: CPW Journal Disk Data Rate. Chart of disk response time in milliseconds (0-2) versus throughput in MB/sec (0-100) for the three configurations.]

Comparing the journal performance of the three configurations as shown in Figure 7, SVC adds latency at low utilizations compared to the native and VIOS runs, but clear journal response time improvements are achieved by the additional SVC write cache as the workload increases.

2.3 Sequential Workload Performance

To evaluate IBM i virtual I/O performance for sequential workload, a sequential disk I/O workload was generated by saving to IBM i virtual tape. Two batch jobs performed a concurrent save of 698 objects, worth 2.0 TB of data, from large user libraries in the database ASP to two 1 TB virtual tape volumes residing in an image catalog in the same database ASP. Figure 8 shows the sequential disk I/O performance in terms of the duration of the save batch jobs for the DS8300 native-attached, VIOS-attached, and VIOS & SVC-attached configurations.

[Figure 8: Sequential Workload Virtual Tape Performance. Bar chart of save duration in minutes (0-100) for DS8k native, DS8k via VIOS, and DS8k via VIOS & SVC.]
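The aggregate disk bandwidth such a save demands can be estimated from the amount of data and the save duration. The 60-minute duration below is a hypothetical example for illustration, not a measured result from Figure 8:

```python
def save_throughput_mb_s(data_tb, duration_min):
    """Aggregate save throughput (MB/s) implied by a data amount
    in TB and a save duration in minutes."""
    return data_tb * 1024 * 1024 / (duration_min * 60)

# Saving the 2.0 TB of section 2.3 in a hypothetical 60 minutes would
# require an aggregate read rate of roughly:
print(round(save_throughput_mb_s(2.0, 60), 1))  # 582.5 MB/s
```

Note that because the virtual tape volumes reside in the same database ASP being saved, the backend must sustain this rate for reads and writes simultaneously, which is why cache hit ratios dominate the result.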
For this high-bandwidth sequential save workload, composed of 32 KB read and write transfers, both VIOS and SVC add latency, and the additional caching layer of SVC provides no benefit because of the already high cache hit ratios achieved by the backend storage system.

3 Summary and Conclusions

The results of this IBM i virtual I/O performance case study show that virtual I/O configurations for IBM i external storage are feasible solutions from a performance perspective. The old paradigm that natively attached external storage clearly provides the best SAN storage performance has been refuted by the superior transaction workload performance achieved with the IBM SAN Volume Controller (SVC), which surpassed the native-attached and VIOS-attached high-end storage configurations without SVC. Natively attached external storage still provided the best performance for sequential workloads like save/restore.

Based on these SVC performance measurements with IBM System Storage DS8000 high-end storage, we expect the performance benefit that SVC with its additional cache can provide for IBM i transaction workload to be even more significant in lower-performing midrange storage system environments.

With IBM i 6.1 on POWER6 systems, customers can take advantage of the enhanced virtualization features offered by the IBM PowerVM Virtual I/O Server to create very flexible and resource-efficient IBM i external storage solutions. Considering IBM's Statements of Direction and the PowerVM roadmap for VIOS and IBM i, the benefits of using VIOS (see section 1.2) will increasingly outweigh the limited additional administrative effort for implementing and managing a VIOS appliance, which has been well documented for the IBM i environment in recent IBM Redbooks publications (see section 4, references 2 and 5).
Besides the storage virtualization benefits provided by SVC, such as concurrent storage migrations, simplified central storage management and storage-vendor-agnostic Copy Services (see also section 1.2), SVC also delivered the best IBM i transaction performance among the tested configurations, living up to its leading SPC-1 results and making it a recommended solution for IBM i workloads as well.
4 References

1) IBM i and IBM System Storage: A Guide to Implementing External Disk on IBM i, SG24-7120
http://www.redbooks.ibm.com/abstracts/sg247120.html?open
2) PowerVM Virtualization Managing and Monitoring, SG24-7590
http://www.redbooks.ibm.com/redpieces/pdfs/sg247590.pdf
3) IBM System Storage SAN Volume Controller, SG24-6423
http://www.redbooks.ibm.com/redbooks/pdfs/sg246423.pdf
4) SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521
http://www.redbooks.ibm.com/redbooks/pdfs/sg247521.pdf
5) IBM i and Midrange External Storage, SG24-7668
http://www.redbooks.ibm.com/abstracts/sg247668.html?open
6) IBM System Storage DS5300 Performance Results in IBM i Power Systems Environment
http://www-03.ibm.com/systems/resources/ds5300performance.pdf
7) IBM Power Systems Performance Capabilities Reference: IBM i operating system Version 6.1, SC41-0607
http://www-03.ibm.com/systems/resources/systems_i_advantages_perfmgmt_pdf_pcrm.pdf