HP StorageWorks Enterprise File Services Clustered Gateway performance and scalability with HP StorageWorks XP12000 Disk Array white paper
Contents

Executive summary
Test environment
  StorageWorks XP12000 Disk Array
  StorageWorks EFS Clustered Gateway
  File services clients
  Network
Test approach
  Tools
  Procedures
Test results
For more information
Executive summary

High file system performance is often a requirement for the purchase of a storage solution, and performance scalability is a key differentiator for the HP StorageWorks Enterprise File Services (EFS) Clustered Gateway product. While a large percentage of the market for this product uses middle-tier storage, such as the StorageWorks Enterprise Virtual Arrays (EVA), enterprises requiring fast storage often purchase higher-end disk arrays, such as those in the StorageWorks XP family. Customers who want both high-performance storage and file services will likely purchase an XP disk array and two or more nodes of the Clustered Gateway product. This white paper describes the performance testing environment, testing approach, tuning considerations, and test results used to demonstrate the performance scalability of file services using multiple Clustered Gateway nodes, a StorageWorks XP12000 Disk Array, and the required network infrastructure.

Test environment

The following figure shows the network and storage area network (SAN) environment used for this performance testing. Each Clustered Gateway node used three GigE interfaces: one for the private cluster network (used for heartbeat, monitors, and DLM traffic) and two for physically separate test networks (the two PERF_ networks). These networks were configured as VLANs at the Ethernet switch. Note that the NFS clients were heterogeneous, both in terms of hardware platform and operating system (OS). While this is not optimal for generating the best performance results, it is typical that a variety of systems would be accessing storage through the Clustered Gateway product.
Figure. EFS Clustered Gateway/XP12000 performance testing environment

[Diagram: ProLiant DL NFS clients running SLES and ProLiant BL p-Class blades running RHEL (7 blades used) connect over the PERF_ VLANs to the Clustered Gateway nodes (ProLiant DL servers, SLES 9, EFS Clustered Gateway with patch), which attach through an FC fabric to the StorageWorks XP12000 Disk Array.]

Clustered Gateway node configuration:
- Dual-core Opteron processors
- 8-GB memory
- GigE ports: internal and PCI-X
- Gb/s FC ports (QLogic)

XP configuration:
- CHIP P-P ports
- GB LDEVs (D+D)
- LDEVs presented to each host port
- 8 paths for each LDEV per node
StorageWorks XP12000 Disk Array

The following figure shows the disk array and controller configuration used for the performance testing. For performance optimization, disk groups managed by a particular Disk Control frame (DKC) are spread across the clusters and striped vertically. The LDEVs created for these tests are all RAID (D+D); higher RAID levels would have introduced unwanted parity-calculation overhead. Each LDEV is presented to each of the eight Clustered Gateway nodes, utilizing a total of 6 Fibre Channel P-P CHIP interfaces on the DKC. A total of 6 LDEVs were created and presented across the 6 CHIP interfaces to the eight nodes.

Figure. XP12000 Disk Array/DKC configuration

StorageWorks EFS Clustered Gateway

To show scalability, the various performance tests involved between one and eight nodes. Each node had the following hardware/software specification:

- ProLiant DL8 G server
- Two dual-core Opteron processors
- 8-GB RAM
- Four GigE ports: two internal, two used on a quad-port PCI-X NIC
- Two FC ports (QLogic)
- SLES 9, EFS Clustered Gateway (with patch)

Each Clustered Gateway node was presented with all 6 of the XP12000 LUNs used in this testing, as required by the Clustered Gateway software. These presentations were specifically made to ensure that each server used two CHIP ports and that no two servers used the same CHIP ports. Furthermore, the LUNs presented through each CHIP port were presented to different HBA ports on the server, creating a one-to-one relationship between a CHIP port and a server HBA port, with LUNs presented per CHIP port/HBA port combination. These LUNs were imported into the cluster, and PolyServe Dynamic Volumes were created, each striping across two LUNs (with a 6-K stripe size). We verified that the two LUNs comprising a volume had different paths (presented on different CHIP ports) in all cases, thus ensuring that any activity to any Dynamic Volume would utilize both HBA ports on
the Clustered Gateway server performing the activity, and both CHIP ports accessed by that server. Upon volume creation, a PSFS file system was created on each volume, and each server mounted four of the PSFS file systems. The file systems were then added to a single Export Group and associated with a Virtual NFS Service (one Virtual NFS Service was created for each physical interface on each server attached to the two PERF_ networks). All exports were created with minimal security considerations and for maximum performance (exported to the world, no_root_squash, nohide, no_wdelay, async, and so on).

File services clients

The heterogeneous NFS client environment for this testing included the following sets of servers:

- ProLiant DL8 G servers (SLES)
- ProLiant DL8 G servers (SLES)
- ProLiant DL servers (SLES)
- 7x ProLiant BL p-Class blades (RHEL, Update)

These servers were used predominantly as NFS clients for the client-side testing. Each client node mounted four file systems exported from the Clustered Gateway cluster, with client pairs mounting the four file systems served from a single Clustered Gateway server across both PERF_ networks, and automated iozone tests were run simultaneously and remotely from one of the clients.

Network

Because of the high port-count requirements for this test environment, two ProCurve 6zl switches were used, with all four optical trunk ports used as an uplink. Two separate VLANs were used (one per PERF_ network), and these VLANs were tagged at the uplink.

Test approach

Tools

The network testing was performed with netperf, a network test tool that performs a number of different tests; the only one used in this case was the TCP stream test. This is an approximation of the NFS traffic that would be generated by the clients. For netperf to operate correctly, a server process must be running on the remote node to receive the traffic generated during the test.
The test is unidirectional and was performed from the client to the server, emulating NFS write traffic only.

Command line:

Clustered Gateway server: netserver (no options)
Client: netperf -fM -H ${hostname}

Options:

-fM — Specifies that output should be reported in megabytes, using power-of-two rather than power-of-ten notation (that is, divided by 1,048,576 as opposed to 1,000,000).
-H ${hostname} — Specifies the target for the test. In this testing, the clients were distributed evenly between the servers. For more details, see the Procedures section.
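As a concrete illustration, the client-to-server distribution described above can be sketched as a small driver script. The hostnames (client1..client8, cgw1..cgw4) and the two-clients-per-gateway mapping are assumptions for this sketch, not names from the original environment.

```shell
#!/bin/sh
# Dry-run sketch of the netperf verification step: print which client
# would stream to which gateway server before launching anything.
plan_netperf() {
    i=0
    for client in client1 client2 client3 client4 \
                  client5 client6 client7 client8; do
        # Distribute clients evenly: two clients per gateway server.
        server="cgw$(( i / 2 + 1 ))"
        # -fM reports throughput in power-of-two megabytes per second.
        echo "${client}: netperf -fM -H ${server}"
        i=$(( i + 1 ))
    done
}
plan_netperf
# To run for real: start `netserver` on each gateway, then ssh each
# printed command to its client in the background and wait for all.
```

Printing the plan first makes it easy to audit the distribution before generating any traffic.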
The tool used for performing both local and NFS testing was iozone, a file system performance test tool, again with many modes of operation. For this testing, iozone was used only to perform large-block-size streaming writes and reads, through the following command lines, with the options explained below.

Command line:

Write: iozone -i 0 -e -+n -s m -r m -t ${run_threads} -w -+m ./machinefile
Read: iozone -i 1 -e -+n -s m -r m -t ${run_threads} -w -+m ./machinefile

Options:

-i 0 / -i 1 — Controls the type of test: 0 is a write test, 1 is a read test.
-e — Specifies that the data flush (fsync/fflush) should be included in the timing calculations. This ensures that data being written from clients is not considered written until it is flushed from the client buffer cache.
-+n — Chooses not to run re-test operations. This avoids re-writing or re-reading the files after the pertinent write or read operation, making more efficient use of test time.
-s m — The size of each individual file used in the test. Each thread writes or reads a single file of this size.
-r m — The record size. Each individual write or read request generated by the test program is of this size.
-t ${run_threads} — The number of threads used in the test run. This value was varied throughout the testing, and was set to eight threads per client used.
-w — Specifies that the files used in testing should not be deleted after the test is finished. This ensures that after a write test, the files created are available for reading. After the read test, the files created were deleted manually. For more details, see the Procedures section.
-+m ./machinefile — Specifies the file containing the remote execution details. This is used by iozone to distribute processes across multiple remote systems. For more details, see the Procedures section.
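The -+m machine file itself is a plain text file with one line per thread, naming the client, the directory that thread works in, and the path to the iozone binary on that client. A minimal generator, with hypothetical client names, mount points, and iozone path, might look like:

```shell
#!/bin/sh
# Generate an iozone -+m machine file: two clients, four file systems
# each, one thread per listed line. All names here are placeholders.
make_machinefile() {
    for client in client1 client2; do
        for fs in 1 2 3 4; do
            # A unique per-client directory on each file system keeps
            # threads out of each other's directories.
            echo "${client} /mnt/psfs${fs}/${client} /usr/local/bin/iozone"
        done
    done
}
make_machinefile > ./machinefile
# A write pass over all eight listed threads could then be, e.g.:
#   iozone -i 0 -e -+n -s 512m -r 1m -t 8 -w -+m ./machinefile
# (the file and record sizes above are placeholders, not the paper's values)
```

Repeating a client's line in the machine file adds another thread on that client, which is how per-client thread counts are scaled.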
Procedures

The intention of this testing was to show the scalability of the Clustered Gateway solution when backed by an XP12000 Disk Array. An initial set of tests was performed against a single server to determine the optimum client load for maximum single-server throughput. When this was determined, the scalability testing was performed using the single-server test case as a building block for each additional server tested. At each stage of the testing, the network was first verified using the netperf tool, and after any discovered issues were fixed, the iozone tool was used to determine the throughput figures, as detailed in the Test results section.

As previously stated, the initial phase of testing was to size the building block that would be the basis of the full set of scalability tests. To do this, a single server was tested with a variety of variables to determine optimum throughput with the available hardware. The key variables tested were:

- Number of NFS client nodes accessing the PSFS volumes on the server
- Number of threads running on each client node used in the test
- Number of PSFS volumes accessed on the server by each test
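The automated sizing run over these three variables can be sketched as a nested loop. The specific values iterated here (1/2/4 clients, 4/8/16 threads, 1/2/4 volumes) are illustrative assumptions, and run_iozone_test stands in for the iozone command line shown earlier:

```shell
#!/bin/sh
# Sketch of the single-server sizing matrix: enumerate every
# combination of the three load variables. The stand-in test function
# simply records the combination being exercised.
run_iozone_test() {
    echo "clients=$1 threads_per_client=$2 volumes=$3"
}
enumerate_matrix() {
    for clients in 1 2 4; do
        for threads in 4 8 16; do
            for volumes in 1 2 4; do
                run_iozone_test "$clients" "$threads" "$volumes"
            done
        done
    done
}
enumerate_matrix
```

Driving every combination from one script is what makes the resulting matrix repeatable.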
A full matrix of these variables was constructed, and all of the tests were run through an automated script to ascertain the throughput figures for each combination in the matrix, as shown in the following table.

Table. Performance test results (NFS clients) — write and read throughput (MB/s), scaling factor, and scaling coefficient by number of servers.

From the figures obtained, a client load of two clients running 6 threads accessing four PSFS volumes on each Clustered Gateway server was chosen. Now the scalability tests could begin.

Before running the throughput tests, a network throughput and scalability test was run to verify that the network hardware was capable of full throughput to the Clustered Gateway servers. This testing showed deficiencies in the network due to poor network cables. Through analysis of the network switch port error counters, the bad cables were discovered and replaced. A further round of network testing verified these cables and allowed the throughput testing to begin.

Again, an automated script was created to ensure that the exact tests performed could be replicated easily. Other automated scripts were generated to handle the mounting of the file systems on the client nodes and the test setup/tidy-up procedures, again with a view to ensuring that all of the results obtained would be easily repeatable. The key to testing successfully was to ensure that all of the clients had mounted the file systems correctly and in an ordered way, and that each iozone thread on each client was accessing the appropriate file system. Furthermore, each test thread used a file in a unique directory, to reduce the effects of multi-thread access to the PSFS volumes. Also, to ensure that the clients did not read from their local cache, each client read the files created by its partner client.
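The ordered mount step might be sketched as follows, with hypothetical Virtual NFS Service names (vnfs-cgw1-a on one PERF_ network, vnfs-cgw1-b on the other) and mount points:

```shell
#!/bin/sh
# Sketch of one client's ordered mount plan. Alternating the two
# PERF_ networks across the four mounts spreads load over both
# interfaces; for the read phase, each client reads files under its
# partner's per-thread directories rather than its own, defeating
# the local page cache. All names are placeholders.
mount_plan() {
    for fs in 1 2 3 4; do
        if [ $(( fs % 2 )) -eq 1 ]; then
            svc="vnfs-cgw1-a"   # service on the first PERF_ network
        else
            svc="vnfs-cgw1-b"   # service on the second PERF_ network
        fi
        echo "mount -o tcp ${svc}:/mnt/psfs${fs} /mnt/nfs${fs}"
    done
}
mount_plan
```

Emitting the mount commands from a script (rather than mounting by hand) is what guarantees every client ends up with an identical, ordered view of the four file systems.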
After the automated test script was written and verified to produce the required behavior, it was executed to obtain the results detailed in the Test results section, as shown in the following figure.
Test results

Figure. Clustered Gateway single-server I/O throughput via NFS — megabytes per second (MB/s), average of write and average of read, by # clients / # PSFS volumes under test per client / # threads per client.

The preceding figure shows the throughput figures from the initial testing performed to determine the optimum load to use for the full multi-client, multi-server NFS testing. The figures clearly show that as the load on the Clustered Gateway server increases, using more clients and more client-side I/O threads, the throughput increases. This is expected behavior, as is the rough plateau shown when the server is close to link speed.

The data shows that the ideal test load would have been four clients accessing all four file systems on a single Clustered Gateway server, but due to hardware limitations within the test environment, two clients each running 6 threads was the maximum that could be used for the full-scale testing.

However, there were a number of problems with the test tool, particularly with scaling. During the full scalability testing, it was discovered that at larger client counts, using 6 threads per client would cause a failure in the test tool. Due to this finding, the full-scale testing was performed with eight threads per client, as opposed to 6. Also, attempts to scale the test tool (iozone) above 6 threads per client, even when using just a single client, were unsuccessful; hence the cut-off point of 6 threads per client in the preceding results.
Table. Performance test results (NFS clients) — write and read throughput (MB/s), scaling factor, and scaling coefficient by number of servers.

Figure. Read/write scalability graph — Clustered Gateway I/O throughput via NFS, megabytes per second (MB/s), average of read and average of write, by cluster size (servers).

The preceding table and figure show very linear performance scalability as nodes are added to the cluster. In addition, the high scaling coefficients for both reads and writes indicate that, although the scaling coefficient starts to drop off, there should be reasonable scalability up to a 6-node cluster. This drop can be attributed to the heterogeneous client mix and the limited number of clients available for testing. It is expected that, with a larger number of clients, and with care and consideration given to their OS and specification, scaling up to 6 Clustered Gateway servers would be linear with a single XP12000 Disk Array.
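The text does not define the scaling metrics explicitly; one plausible reading, in which the scaling coefficient is the measured aggregate throughput divided by the ideal N-times-single-node figure, can be checked with a small helper. The MB/s values passed below are made-up examples, not results from the tables:

```shell
#!/bin/sh
# One plausible definition of the scaling coefficient: measured
# aggregate MB/s divided by (node count x single-node MB/s),
# expressed as a percentage. Input figures are hypothetical.
compute_coeff() {
    # $1 = node count, $2 = aggregate MB/s, $3 = single-node MB/s
    awk -v n="$1" -v t="$2" -v t1="$3" \
        'BEGIN { printf "%.0f%%\n", 100 * t / (t1 * n) }'
}
compute_coeff 3 570 200   # prints 95%
```

Under this definition a coefficient near 100% means each added node contributes nearly its full single-node throughput.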
More importantly, these numbers confirmed that with this performance configuration, a consistently high MB/s figure could be achieved per node. This level of performance resulted in approximately 6% CPU utilization on the Clustered Gateway nodes, suggesting that the bottleneck was the available throughput of the TCP network interfaces utilized for the testing. This is borne out by the initial series of iozone tests (writes only) performed against the local file systems on the Clustered Gateway servers, as shown in the following table and figure.

Table. Performance test results (local file system) — write throughput (MB/s), delta, scaling factor, and scaling coefficient by number of servers.

Figure. Local-node write performance scalability (cumulative) — Clustered Gateway I/O throughput to local PSFS, megabytes per second (MB/s), average of write and maximum delta, by number of servers.
The dramatically better per-node performance in this local test underscores the fact that, when working over NFS, the network is the bottleneck, as would be expected. These results show a total throughput of approximately .6 Gb/s to the XP12000 Disk Array. The specification for the array states that the large sequential I/O throughput of the array is greater than 9 Gb/s. This suggests that even a fully configured Clustered Gateway system of 6 nodes would not saturate the XP12000 Disk Array with the configuration used for this testing.

The poor scaling shown here is due to individual test-run fluctuations on each server. These effects can be negated in future testing by running for a longer period of time (using larger test files) and by repeating the tests multiple times, taking the average over a minimum of five runs. Similarly, it is expected that this would level the delta between successive runs to a nearly constant value.

Finally, it should be noted that none of these tests saturated the XP12000 Disk Array. Further test design is recommended to isolate the point where the array is saturated and performance scalability flattens. Such testing is likely to require more Clustered Gateway nodes, and should be performed locally on the nodes, taking client network inefficiency out of the equation.
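The proposed multi-run averaging is straightforward to script; the five throughput samples below are hypothetical MB/s readings for a single configuration, not measurements from this paper:

```shell
#!/bin/sh
# Average repeated runs to damp run-to-run fluctuation: pipe one
# throughput sample per line into mean() and report the arithmetic
# mean to one decimal place.
mean() {
    awk '{ sum += $1; n++ } END { printf "%.1f\n", sum / n }'
}
printf '%s\n' 201 189 197 205 193 | mean   # prints 197.0
```

Reporting the mean of at least five runs per configuration would also make the per-run delta visible as a simple spread around that mean.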
For more information

© 2007 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

-7ENW, August 2007
More informationHP SAS benchmark performance tests
HP SAS benchmark performance tests technology brief Abstract... 2 Introduction... 2 Test hardware... 2 HP ProLiant DL585 server... 2 HP ProLiant DL380 G4 and G4 SAS servers... 3 HP Smart Array P600 SAS
More informationQuickSpecs. Models. Key Features. Overview. Retired
Overview The HP StorageWorks Network Storage Router (NSR) N1200 is a key component in a complete data protection solution. It is a 1U rackmount router with one Fibre Channel port and two SCSI ports. It
More informationHP StorageWorks MSA/P2000 Family Disk Array Installation and Startup Service
HP StorageWorks MSA/P2000 Family Disk Array Installation and Startup Service HP Services Technical data The HP StorageWorks MSA/P2000 Family Disk Array Installation and Startup Service provides the necessary
More informationHP ProLiant ML370 G4 Storage Server
HP ProLiant ML370 G4 Storage Server Data sheet The HP ProLiant ML370 Storage Server is ideal for price conscious small to medium business (SMB) or remote offices that need more performance and scalability
More informationAvailable Packs and Purchase Information
Overview Rapid Deployment Pack (RDP) is a complete deployment solution for HP ProLiant servers. RDP automates the process of deploying and provisioning server software, enabling companies to quickly and
More informationHP Serviceguard for Linux Certification Matrix
Technical Support Matrix HP Serviceguard for Linux Certification Matrix Version 04.05, April 10 th, 2015 How to use this document This document describes OS, Server and Storage support with the listed
More informationHP MSA2000 Family VDS and VSS Hardware Providers installation guide
HP MSA2000 Family VDS and VSS Hardware Providers installation guide Part number: 485500-003 Second edition: February, 2009 Legal and notice information Copyright 2009 Hewlett-Packard Development Company,
More informationQuickSpecs. What's New The addition of VMware ESX Server and VMware Virtual Infrastructure Node (VIN)
Overview HP supports, certifies, and sells VMware Virtualization software on HP ProLiant servers. VMware from HP provides a comprehensive suite of virtualization solutions designed expressly for mission-critical
More informationQuickSpecs. What's New Support for QMH4062 1GbE iscsi 2-Port Adapter with Virtual Connect Kit Support for Virtual Connect Ethernet Modules
Overview The QMH4062 is a dual port fully integrated Gigabit Ethernet iscsi initiator mezzanine option optimized for iscsi traffic from an HP ProLiant server. This iscsi initiator is an alternative to
More informationHP AutoPass License Server
HP AutoPass License Server Software Version: 9.0 Windows, Linux and CentOS operating systems Support Matrix Document Release Date: October 2015 Software Release Date: October 2015 Page 2 of 10 Legal Notices
More informationWLAN high availability
Technical white paper WLAN high availability Table of contents Overview... 2 WLAN high availability implementation... 3 Fundamental high availability technologies... 3 AP connection priority... 3 AC selection...
More informationQuickSpecs. Models. HP Smart Array 642 Controller. Overview. Retired
Overview The Smart Array 642 Controller (SA-642) is a 64-bit, 133-MHz PCI-X, dual channel, SCSI array controller for entry-level hardwarebased fault tolerance. Utilizing both SCSI channels of the SA-642
More informationQuickSpecs. What's New New RoHS compliant HP 8Gb Fibre Channel HBAs. HP 8Gb PCI-e FC HBAs. Overview
Overview The HP 8Gb PCIe Fibre Channel Host Bus Adapters brings datacenter infrastructure components to a higher level of performance and efficiency with the ability to deliver twice the I/O performance
More informationXP7 High Availability User Guide
XP7 High Availability User Guide Abstract HPE XP7 High Availability helps you create and maintain a synchronous copy of critical data in a remote location. This document describes and provides instructions
More informationHP ProLiant delivers #1 overall TPC-C price/performance result with the ML350 G6
HP ProLiant ML350 G6 sets new TPC-C price/performance record ProLiant ML350 continues its leadership for the small business HP Leadership with the ML350 G6» The industry s best selling x86 2-processor
More informationTarget Environments The Smart Array 6i Controller offers superior investment protection to the following environments: Non-RAID
Overview The Smart Array 6i controller is an Ultra320 intelligent array controller for entry-level, hardware-based fault tolerance for protection of OS, applications, and logs. Most models have one internal-only
More informationQuickSpecs. Integrated NC7782 Gigabit Dual Port PCI-X LOM. Overview
Overview The integrated NC7782 dual port LOM incorporates a variety of features on a single chip for faster throughput than previous 10/100 solutions using Category 5 (or better) twisted-pair cabling,
More informationHP StorageWorks 4000/6000/8000 Enterprise Virtual Array connectivity for Sun Solaris installation and reference guide
HP StorageWorks 4000/6000/8000 Enterprise Virtual Array connectivity for Sun Solaris installation and reference guide Part number: 5697-5263 First edition: May 2005 Legal and notice information Copyright
More informationHPE ConvergedSystem 700 for Hyper-V Deployment Accelerator Service
Data sheet HPE ConvergedSystem 700 for Hyper-V Deployment Accelerator Service HPE Technology Consulting HPE ConvergedSystem 700 for Hyper-V is a solution that allows you to acquire and deploy a virtualization
More informationQuickSpecs. HPE Library and Tape Tools. Overview. Features & Benefits. What's New
Overview (L&TT) is a free, robust diagnostic tool for HPE StoreEver Tape Family. Targeted for a wide range of users, it is ideal for customers who want to verify their installation, ensure product reliability,
More informationQuickSpecs. What's New New 146GB Pluggable Ultra320 SCSI 15,000 rpm Universal Hard Drive. HP SCSI Ultra320 Hard Drive Option Kits (Servers) Overview
Overview A wide variety of rigorously tested, HP-qualified, SMART capable, Ultra320 Hard Drives offering data integrity and availability in hot pluggable and non-pluggable models. HP 15,000 rpm Hard Drives
More informationExchange Server 2007 Performance Comparison of the Dell PowerEdge 2950 and HP Proliant DL385 G2 Servers
Exchange Server 2007 Performance Comparison of the Dell PowerEdge 2950 and HP Proliant DL385 G2 Servers By Todd Muirhead Dell Enterprise Technology Center Dell Enterprise Technology Center dell.com/techcenter
More informationQuickSpecs. Models. HP NC380T PCI Express Dual Port Multifunction Gigabit Server Adapter. Overview
Overview The HP NC380T server adapter is the industry's first PCI Express dual port multifunction network adapter supporting TOE (TCP/IP Offload Engine) for Windows, iscsi (Internet Small Computer System
More informationQuickSpecs. ProLiant Cluster F500 for the Enterprise SAN. Overview. Retired
Overview The is designed to assist in simplifying the configuration of cluster solutions that provide the highest level of data and applications availability in the Windows Operating System environment
More informationHPE Enhanced Network Installation and Startup Service for HPE BladeSystem
Data sheet HPE Enhanced Network Installation and Startup Service for HPE BladeSystem HPE Lifecycle Event Services HPE Enhanced Network Installation and Startup Service for HPE BladeSystem provides configuration
More informationQuickSpecs. Models ProLiant Cluster F200 for the Entry Level SAN. Overview
Overview The is designed to assist in simplifying the configuration of cluster solutions that provide high levels of data and applications availability in the Microsoft Windows Operating System environment
More informationQuickSpecs. Models HP SC11Xe Host Bus Adapter B21. HP SC11Xe Host Bus Adapter. Overview
Overview The provides customers with the flexibility and speed they have come to expect from HP. This HBA is ideal for HP tape customers needing to attach Ultra320 tape backup devices on servers using
More informationQuickSpecs. Models SATA RAID Controller HP 6-Port SATA RAID Controller B21. HP 6-Port SATA RAID Controller. Overview.
Overview HP 6 Port SATA RAID controller provides customers with new levels of fault tolerance for low cost storage solutions using SATA hard drive technologies. Models SATA RAID Controller 372953-B21 DA
More informationKey results at a glance:
HP ProLiant BL680c G5 server blade takes world record for excellent performance for four-processor server on three-tier SAP SD Standard Application Benchmark with Microsoft Windows 2008. The HP Difference
More informationModels HP Security Management System XL Appliance with 500-IPS System License
Overview Models HP Security System Appliance with 25-IPS System License HP Security System XL Appliance with 500-IPS System License HP vsms for VMware vsphere single host Software License HP High Availability
More informationModels Smart Array 6402/128 Controller B21 Smart Array 6404/256 Controller B21
Overview The Smart Array 6400 high performance Ultra320, PCI-X controller family provides maximum performance, flexibility, and reliable data protection for HP ProLiant servers, through its unique modular
More informationPerformance of Mellanox ConnectX Adapter on Multi-core Architectures Using InfiniBand. Abstract
Performance of Mellanox ConnectX Adapter on Multi-core Architectures Using InfiniBand Abstract...1 Introduction...2 Overview of ConnectX Architecture...2 Performance Results...3 Acknowledgments...7 For
More informationThe HP Blade Workstation Solution A new paradigm in workstation computing featuring the HP ProLiant xw460c Blade Workstation
The HP Blade Workstation Solution A new paradigm in workstation computing featuring the HP ProLiant xw460c Blade Workstation Executive overview...2 HP Blade Workstation Solution overview...2 Details of
More informationQuickSpecs. What's New. Models. HP ProLiant Essentials Performance Management Pack version 4.5. Overview. Retired
Overview ProLiant Essentials Performance Management Pack (PMP) is a software solution that detects, analyzes, and explains hardware bottlenecks on HP ProLiant servers. HP Integrity servers and HP Storage
More informationQuickSpecs. HP Integrity Virtual Machines (Integrity VM) Overview. Retired. Currently shipping versions:
Currently shipping versions: HP Integrity VM (HP-UX 11i v3 VM Host) v4.2 HP Integrity VM (HP-UX 11i v2 VM Host) v3.5 Integrity Virtual Machines (also called Integrity VM or HPVM) is a hypervisor product
More informationTable of contents. OpenVMS scalability with Oracle Rdb. Scalability achieved through performance tuning.
OpenVMS scalability with Oracle Rdb Scalability achieved through performance tuning. Table of contents Abstract..........................................................2 From technical achievement to
More informationQuickSpecs. Models 64-Bit/133-MHz Dual Channel Ultra320 SCSI host bus adapter B bit/133-MHz Dual Channel Ultra320 SCSI host bus adapter
Overview The HP (HBA) provides customers with the flexibility and speed they have come to expect from HP. The 64-bit/133-MHz Dual Channel Ultra320 SCSI HBA is ideal for HP tape arrays and larger non- RAID
More informationHP LeftHand P4500 and P GbE to 10GbE migration instructions
HP LeftHand P4500 and P4300 1GbE to 10GbE migration instructions Part number: AT022-96003 edition: August 2009 Legal and notice information Copyright 2009 Hewlett-Packard Development Company, L.P. Confidential
More informationQuickSpecs. Compaq Smart Array 431 Controller M ODELS
M ODELS Smart Array 431 Controller 127695-B21 127695-291(Japan) Data Compatibility Software Consistency Wide Ultra3 SCSI 64-bit Architecture 64-bit PCI Bus Design Single internal/external SCSI channel
More informationMELLANOX MTD2000 NFS-RDMA SDK PERFORMANCE TEST REPORT
MELLANOX MTD2000 NFS-RDMA SDK PERFORMANCE TEST REPORT The document describes performance testing that was done on the Mellanox OFED 1.2 GA NFS-RDMA distribution. Test Cluster Mellanox Technologies 1 July
More informationHP Storage Provisioning Manager HP 3PAR StoreServ Peer Persistence
Technical white paper HP Storage Provisioning Manager HP 3PAR StoreServ Peer Persistence Handling HP 3PAR StoreServ Peer Persistence with HP Storage Provisioning Manager Click here to verify the latest
More informationQuickSpecs. Key Features and Benefits. HP C-Series MDS 9000 Storage Media Encryption (SME) Software. Overview. Retired
Overview MDS 9000 Storage Media Encryption (SME) secures data stored on tape drives and virtual tape libraries (VTLs) in a storage area network (SAN) environment using secure IEEE standard Advanced Encryption
More informationQuickSpecs. HP NC6170 PCI-X Dual Port 1000SX Gigabit Server Adapter. Overview. Retired
The is a dual port fiber Gigabit server adapter that runs over multimode fiber cable. It is the first HP server adapter to combine dual port Gigabit Ethernet speed with PCI-X bus technology for fiber-optic
More informationHPE OneView for Microsoft System Center Release Notes (v 8.2 and 8.2.1)
Center Release Notes (v 8.2 and 8.2.1) Part Number: 832154-004a Published: April 2017 Edition: 2 Contents Center Release Notes (v 8.2 and 8.2.1)... 4 Description...4 Update recommendation... 4 Supersedes...
More informationAchieve Patch Currency for Microsoft SQL Server Clustered Environments Using HP DMA
Technical white paper Achieve Patch Currency for Microsoft SQL Server Clustered Environments Using HP DMA HP Database and Middleware Automation version 10.30 Table of Contents Purpose 2 Prerequisites 4
More informationHP XP7 High Availability User Guide
HP XP7 High Availability User Guide Abstract HP XP7 High Availability helps you create and maintain a synchronous copy of critical data in a remote location. This document describes and provides instructions
More informationQuickSpecs. Models. Overview
Overview The HP Smart Array P800 is HP's first 16 port serial attached SCSI (SAS) RAID controller with PCI-Express (PCIe). It is the highest performing controller in the SAS portfolio and provides new
More informationQuickSpecs. Models 64-Bit/133-MHz Dual Channel Ultra320 SCSI host bus adapter B bit/133-MHz Dual Channel Ultra320 SCSI host bus adapter
Overview The HP (HBA) provides customers with the flexibility and speed they have come to expect from HP. The 64-bit/133-MHz Dual Channel Ultra320 SCSI HBA is ideal for HP tape arrays and larger non- RAID
More information