

A6826A PCI-X Dual Channel 2Gb/s Fibre Channel Adapter Performance Paper for Integrity Servers

Table of contents
Introduction
Executive summary
Test results
  IOPs
  Service demand
  Throughput
Scalability
System configuration guidelines
Test details
Additional information

Introduction

This paper provides basic I/O performance and scalability information for the A6826A dual-channel 2Gb Fibre Channel adapter on HP's Itanium-based Integrity servers. A series of tests was conducted to evaluate the adapter's performance on rx2600, rx4640, rx7620, rx8620 and HP Integrity Superdome servers. The tests measured the IOPs and throughput of single and multiple adapters. This paper addresses the performance capabilities of the A6826A adapter when used with the above-mentioned HP Integrity servers. It also discusses the system setup considerations necessary to obtain the maximum possible performance from A6826A adapters installed in these servers.

This paper focuses on the following topics:
- Test results: single- and multiple-adapter performance data, including IOPs and throughput for a single port and dual ports.
- Scalability: multiple-adapter scalability tests on rx2600, rx4640, rx7620, rx8620 and Integrity Superdome systems.
- System configuration guidelines: rx2600, rx4640, rx7620, rx8620 and Integrity Superdome system configurations and recommendations.
- Test details: the HP products used, the test setup, and the benchmark tool used.

Executive summary

A6826A adapters installed in HP Itanium-based Integrity servers offer an excellent SAN solution. The A6826A adapter provides read throughput of 195MB/s and write throughput of 186MB/s on a single port running at 2Gb/s. With both ports running at 2Gb/s, a read throughput of 390MB/s and a write throughput of 372MB/s is achieved.

A6826A performance summary

                                     single port              dual ports
Operation                        read    write    bd*     read    write    bd*
IOPs                            35180    26190   26187   70320    41221   52414
Achieved throughput (MB/s)        195      186     370     390      372     720
% of 2Gb/s FC theoretical max      98       93    92.5    97.5       93      90

NOTE: *bd: bidirectional operation

The above performance data was obtained by installing an A6826A adapter in a PCI-X 133MHz slot (Slot 8) of an rx4640 server.
An A6826A adapter installed in a PCI-X slot of an rx4640 server offers excellent bidirectional performance. A single A6826A port running at 2Gb/s provides bidirectional throughput of 370MB/s, and both A6826A ports running at 2Gb/s provide bidirectional throughput of 720MB/s with linear scaling. On other Integrity servers the A6826A adapter provides outstanding performance with linear scaling for up to 4 adapters when all eight ports are being used at 2Gb/s bandwidth. Additional adapters may be added to provide greater connectivity.
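The "% of 2Gb/s FC theoretical max" figures follow from the Fibre Channel line rate: 8b/10b encoding puts 10 bits on the wire for every data byte, so a nominal 2Gb/s link carries at most about 200MB/s of payload per direction. A minimal sketch of the arithmetic (the function name is ours, not the paper's):

```python
def fc_theoretical_max_mb_s(nominal_gbps):
    """Payload ceiling of a Fibre Channel link: 8b/10b encoding sends
    10 bits per data byte, so divide the nominal line rate by 10."""
    return nominal_gbps * 1e9 / 10 / 1e6  # MB/s per direction

max_2g = fc_theoretical_max_mb_s(2)      # 200.0 MB/s
# Single-port write from the summary table: 186 MB/s achieved
print(round(186 / max_2g * 100, 1))      # 93.0 (% of theoretical max)
```

The same calculation with both ports active (400MB/s ceiling) reproduces the dual-port percentages in the table.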

Test results

HP obtained the performance data presented in this paper using the diskbench (db) benchmarking utility. Various block-size tests were executed for read and write on single and dual ports. The IOPs metric was obtained with 1 KB block-size transfers; the throughput metric was obtained with 128 KB block-size transfers. The throughput metric is useful in modeling large sequential transfers, for example, remote backup. The IOPs metric is useful in modeling small transactional traffic.

IOPs

Chart 1a: IOPs                        Chart 1b: % CPU utilization

         1 Port   2 Ports                      1 Port   2 Ports
Read      35180     70320             Read        39%       73%
Write     26190     41221             Write       27%       46%

Chart 1a shows the number of I/O operations per second for read and write operations on a single port and on both ports of an A6826A adapter, measured on a 4-way rx4640 server. The X-axis represents the number of Fibre Channel ports and the Y-axis represents the number of I/O operations per second. Chart 1a shows linear scaling of read IOPs for one and two ports. The number of read I/O operations per second for a single port and for dual ports is 35180 and 70320 respectively. The number of write operations per second for a single port and for dual ports is 26190 and 41221 respectively. The IOPs metric is limited by the processor used on the A6826A adapter.

Chart 1b shows the % CPU utilization for the I/O operation tests. The X-axis represents the number of Fibre Channel ports; the Y-axis represents the % CPU utilization. To accurately record CPU utilization, only one CPU was configured in these tests. Chart 1b demonstrates excellent CPU utilization numbers for small-size I/Os on both ports of an A6826A adapter.

Service demand

Service demand is a measure of the CPU time needed to handle 1 KB of I/O. The service demand number for an adapter is a measure of how much load the adapter places on the CPU(s) for each KB/s of throughput. The A6826A adapter offers low CPU service demand for small I/Os.
To illustrate the service demand for the A6826A adapter, small-size operation tests were conducted using db. Sequential read and write operations of 4KB and 8KB I/O sizes were performed on an rx4640 server with one CPU configured. The following table shows single- and dual-port throughput and CPU utilization for raw sequential read and sequential write operations with 4KB and 8KB I/O sizes:

                    Sequential Read                     Sequential Write
Number         4KB              8KB               4KB               8KB
of Ports   Tput1 CPU2  SD3  Tput1 CPU2  SD3   Tput1 CPU2  SD3   Tput1 CPU2  SD3
Single       135  39%  2.8    189  30%  1.5     79  26%  3.2     111  21%  1.8
Dual         272  72%  2.6    379  55%  1.4    157  45%  2.8     223  36%  1.6

NOTE:
1 Throughput in MB/s
2 Single-CPU utilization numbers
3 Service demand in microseconds, calculated as ((%CPU utilization/100) / throughput in KB/s) x 10^6
Data gathered on an rx4640 server configured with a single CPU.

Throughput

Chart 2a: single- and dual-port           Chart 2b: % CPU utilization for
throughput (MB/s)                         throughput tests

                  1 Port   2 Ports                  1 Port   2 Ports
Theoretical max      200       400        Read         1.9      13.7
Read                 195       390        Write        1.8      13.5
Write                186       372

Data gathered on an rx4640 server configured with a single CPU.

Chart 2a shows the 128KB raw sequential read and write throughput of single and dual ports on an A6826A adapter. The X-axis represents the number of Fibre Channel ports; the Y-axis represents the throughput in MB/s. Chart 2a shows one A6826A port performing read operations at 195MB/s; two A6826A ports scale to 2x the performance of a single port, performing read operations at up to 390MB/s. For write operations, one A6826A port performs at 186MB/s; two ports scale to 2x the performance of a single port, performing writes at up to 372MB/s. Chart 2a demonstrates outstanding read and write performance with linear scaling for 1 and 2 ports on the A6826A adapter.

Chart 2b shows the % CPU utilization in the throughput tests. The X-axis represents the number of Fibre Channel ports; the Y-axis represents the % CPU utilization. To accurately record the CPU utilization, only one CPU is configured in these tests. Chart 2b shows very small percentages of CPU utilization for one or two A6826A ports in read and write throughput tests.
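The service demand footnote can be checked directly against the table. A minimal sketch of the stated formula (function and variable names are ours); plugging in the single-port 4KB read numbers reproduces the tabulated 2.8 microseconds:

```python
def service_demand_us(cpu_util_pct, throughput_mb_s):
    """CPU microseconds per KB of I/O, per the paper's formula:
    ((%CPU / 100) / throughput_in_KB_per_s) * 10^6."""
    throughput_kb_s = throughput_mb_s * 1024
    return (cpu_util_pct / 100) / throughput_kb_s * 1e6

# Single-port 4KB sequential read: 135 MB/s at 39% of one CPU
print(round(service_demand_us(39, 135), 1))  # 2.8
```

The other eight cells of the table check out the same way, e.g. dual-port 8KB reads (379 MB/s, 55% CPU) yield 1.4 microseconds.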

Scalability

A series of tests was conducted to evaluate the performance scalability of the A6826A adapter on rx2600, rx4640, rx7620, rx8620 and HP Integrity Superdome servers.

Chart 3a: 128KB sequential read throughput (MB/s) by number of ports

Number of ports     1     2     3     4     5     6     7     8
rx2600            191   380   572   760   952  1142  1332  1518
rx4640            191   382   573   763   953  1143  1331  1466
rx7620            191   382   574   764   947  1133  1313  1506
rx8620            191   383   574   764   954  1143  1333  1522
Integrity SD      191   382   572   763   953  1142  1333  1523

Server configurations:
- rx2600 with two 900MHz Itanium 2 processors and 8GB physical memory
- rx4640 with four 1.3GHz Itanium 2 processors and 8GB physical memory
- rx7620, 2-cell partition, each cell with four 1.3GHz Itanium 2 processors and 8GB physical memory
- rx8620, 2-cell partition, each cell with four 1.3GHz Itanium 2 processors and 8GB physical memory
- Integrity Superdome, 2-cell partition with eight 1.5GHz Itanium 2 processors and 8GB physical memory

Chart 3a shows the read throughput performance of 1, 2, 3 and 4 A6826A adapters installed in rx2600, rx4640, rx7620, rx8620 and HP Integrity Superdome servers. The X-axis represents the number of ports; the Y-axis represents the throughput performance in MB/s. Chart 3a shows excellent read throughput performance with linear scaling for 4 A6826As installed in these servers, reaching an aggregate throughput of 1523MB/s. A6826A adapters installed in HP Integrity servers offer an excellent scalable SAN solution.

Chart 3b: 128 KB sequential write throughput (MB/s) by number of ports

Number of ports     1     2     3     4     5     6     7     8
rx2600            186   372   557   739   928  1111  1295  1480
rx4640            188   376   564   752   938  1124  1307  1486
rx7620            186   372   558   742   921  1103  1287  1468
rx8620            186   372   558   722   908  1094  1278  1461
Integrity SD      188   375   560   751   938  1126  1313  1496

Server configurations are the same as for Chart 3a.

Chart 3b shows the write throughput performance of 1, 2, 3 and 4 A6826A adapters installed in rx2600, rx4640, rx7620, rx8620 and HP Integrity Superdome servers. The X-axis represents the number of ports; the Y-axis represents the throughput performance in MB/s. Chart 3b shows excellent write throughput performance with linear scaling for 4 A6826A adapters installed in these servers, reaching an aggregate throughput of 1496 MB/s.

The table below summarizes the maximum number of cards that can be installed in the tested servers while maintaining linear scalability. Additional adapters may be installed to provide better connectivity.

Maximum number of A6826As for linear scalability

Port operating speed               rx2600  rx4640  rx7620     rx8620     Integrity Superdome
Both ports at 2Gb/s                   4       4    2 per I/O  2 per I/O  2 per I/O
One port at 1Gb/s, other at 2Gb/s     4       4    3 per I/O  3 per I/O  3 per I/O
Both ports at 1Gb/s                   4       6    4 per I/O  4 per I/O  4 per I/O
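"Linear scaling" can be quantified as aggregate throughput divided by the number of ports times single-port throughput: a ratio near 1.0 means each added port contributes its full bandwidth. A small sketch (the function name is ours) using the Chart 3b Superdome write figures:

```python
def scaling_efficiency(aggregate_mb_s, n_ports, single_port_mb_s):
    """1.0 = perfectly linear scaling; lower values indicate contention
    on the bus, cell interconnect, or adapter."""
    return aggregate_mb_s / (n_ports * single_port_mb_s)

# Integrity Superdome, 8 ports writing: 1496 MB/s aggregate vs 188 MB/s per port
print(round(scaling_efficiency(1496, 8, 188), 2))  # 0.99
```

Applied row by row, every server in Charts 3a and 3b stays above roughly 0.95 efficiency through 8 ports, which is what the paper means by linear scaling.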

System configuration guidelines

NOTE: Some PCI-X 133MHz slots on Integrity servers have higher bandwidth than other PCI-X 133MHz slots on the same server. In this white paper, these higher-bandwidth PCI-X 133MHz slots are referred to as high-performance PCI-X 133MHz slots.

rx2600
The rx2600 is a 2-way entry-level Integrity server with 4 PCI-X 133MHz slots. A6826A adapters installed in an rx2600 server offer a great scalable solution.

rx4640
The rx4640 is a 4-way entry-level Integrity server with eight I/O slots. Slots 1 and 2 are reserved for core SCSI and core LAN. Slots 3, 4, 5, and 6 are shared slots operating at 3.3V, 64-bit PCI-66MHz. Slots 7 and 8 are high-performance PCI-X 133MHz slots. To achieve linear scalability with up to four A6826A adapters, HP recommends that the first two A6826As be installed in slots 7 and 8; the next two should be installed in the shared slots, avoiding adjacent shared slots.

rx7620 / rx8620
The rx7620 and rx8620 are cell-based mid-range Integrity servers. The I/O subsystem in the rx7620 and rx8620 offers 16 PCI-X 133MHz slots when these servers are configured with 2 cells. Fourteen of the sixteen slots are high-performance PCI-X 133MHz slots. HP recommends A6826A adapters be installed in high-performance PCI-X 133MHz slots (physical slot numbers 1, 2, 3, 4, 5, 6, 7) to achieve the performance demonstrated in this paper. To obtain linear scalability, HP recommends the rx7620 or rx8620 be configured with a 2-cell partition: two A6826As should be installed in any of the PCI-X slots numbered 1 to 7, and a maximum of two A6826As should be installed in any of the PCI-X slots numbered 9 to 15. Each cell should be configured with a multiple of 8 equal-capacity DIMMs to take advantage of memory interleaving. The DIMMs on a cell should be evenly distributed across the busses.

HP Integrity Superdome
The HP Integrity Superdome is a cell-based high-end Integrity server.
Each Superdome I/O subsystem offers 4 high-performance PCI-X slots capable of delivering 1066MB/s peak bandwidth. HP recommends A6826A adapters be installed in high-performance PCI-X slots (physical slot numbers 4, 5, 6, 7) to achieve the performance demonstrated in this paper. To obtain the linear scalability demonstrated in this paper, an Integrity Superdome partition should be configured with at least two cells with 8GB of physical memory, and no more than two A6826As should be installed in the four high-performance slots of each I/O cage. Each cell should be configured with a multiple of 8 equal-capacity DIMMs to take advantage of memory interleaving. The DIMMs on a cell should be evenly distributed across the busses.
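The quoted slot bandwidth is simply the PCI-X clock multiplied by the 64-bit (8-byte) bus width; at the nominal 133.33MHz clock that works out to roughly 1066MB/s. A quick sketch (the function name is ours):

```python
def pcix_peak_mb_s(clock_mhz, width_bits=64):
    """Peak PCI-X bandwidth: one transfer per clock across the full bus width."""
    return clock_mhz * (width_bits // 8)

print(pcix_peak_mb_s(133.33))  # 1066.64, matching the ~1066MB/s slot rating
```

The same formula explains why the shared 64-bit PCI-66MHz slots on the rx4640 top out at about half that figure.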

Test details

The performance results presented in this paper were obtained with A6826A adapters installed in various HP Integrity servers. The system configurations used in the test setup are detailed below:

Products used in the performance measurement tests

Servers tested:
- rx2600: two 900MHz Intel Itanium 2 processors, 4GB system memory
- rx4640: four 1.3GHz Intel Itanium 2 processors, 8GB system memory
- rx7620: 2-cell partition, each cell with four 1.3GHz Intel Itanium 2 processors, 8GB system memory
- rx8620: 2-cell partition, each cell with four 1.3GHz Intel Itanium 2 processors, 8GB system memory
- Integrity Superdome: 2-cell partition, each cell with four 1.5GHz Intel Itanium 2 processors, 8GB system memory

A6826A HBA:
- PCI-X dual-channel 2Gb Fibre Channel adapter
- Each port capable of independently operating at 1 or 2Gb/s (auto-negotiation)
- 33/66/100/133MHz, 64-bit capable

Benchmark software: diskbench (db), the benchmark suite that generated disk read and write traffic for these tests.

Additional information

For more information about the A6826A adapter, please visit: http://www.hp.com/products1/serverconnectivity/storagesnf2/index.html
For more information about HP's Itanium-based Integrity servers, please visit: http://www.hp.com/products1/servers/integrity/index.html

© 2005 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Intel and Itanium are trademarks or registered trademarks of Intel Corporation in the U.S. and other countries and are used under license. UNIX is a registered trademark of The Open Group. 02/2005