Performance Study

Comparison of Storage Protocol Performance
ESX Server 3.5

This study provides performance comparisons of the various storage connection options available to VMware ESX Server. We used the widely adopted IOmeter benchmark for the comparison. The results show that all four network storage options can reach wire-speed throughput when properly configured. Wire-speed throughput is also maintained with multiple virtual machines driving concurrent I/Os, indicating that the limiting factor in performance scalability is not in ESX Server. The data also demonstrates that, although Fibre Channel has the best performance in throughput, latency, and CPU efficiency among the four options, iSCSI and NFS are also quite capable and may offer a better cost-to-performance ratio in certain deployment scenarios.

This study covers the following topics:

  Hardware and Software Environment
  Performance Results
  Conclusion

Hardware and Software Environment

Tables 1 and 2 provide details of the test environment.

Table 1. ESX Server host and storage components

  Component                         Details
  Hypervisor                        VMware ESX Server 3.5
  Processors                        Four 2.4GHz dual-core AMD Opteron processors
  Memory                            32GB
  NICs for NFS and software iSCSI   1Gb on-board NICs
  Fibre Channel HBA                 2Gb FC-AL QLogic QLA2340
  IP network for NFS and iSCSI      1Gb Ethernet with a dedicated switch and VLAN
  iSCSI HBA                         QLogic QLA4050C
  File system                       Fibre Channel and iSCSI: None. RDM (physical) was used.
                                    NFS: Native file system on the NFS server.
  Storage server/array              One server supporting FC, iSCSI, and NFS.
                                    (All protocols accessed the same LUNs.)

Table 2. Virtual machine configuration

  Component                      Details
  Operating system version       Windows 2003 Enterprise
  Number of virtual processors   1
  Memory                         256MB
  Virtual disk size for data     100MB, to achieve the cached-run effect
  File system                    None. Physical drives are used.

To obtain optimal performance for large I/O sizes, we modified a registry setting in the guest operating system. For details, see http://www.lsilogic.com/files/support/ssp/fusionmpt/win23/symmpi23_112.txt.

NOTE: Because we used no file system in the data path in the virtual machine, no caching is involved in the guest operating system. As a result, the amount of memory configured for the virtual machine has no impact on performance in these tests as long as it satisfies the minimum requirement.

Workload

We used the IOmeter benchmarking tool (http://sourceforge.net/projects/iometer), originally developed at Intel, to generate I/O load for these tests. We held the IOmeter parameters shown in Table 3 constant for each individual test.

Table 3. IOmeter configuration

  Parameter                                  Value
  Number of outstanding I/Os                 16
  Run time                                   3 minutes
  Ramp-up time                               60 seconds
  Number of workers to spawn automatically   1

To focus on the performance of the storage protocols, we adopted the cached-run approach often used in the industry when there is a need to minimize latency effects from the physical disks. In such an experimental setup, the entire I/O working set resides in the cache of the storage server or array. Because no mechanical movement is involved in serving a read request, the maximum possible read performance can be achieved from the storage device. This also ensures that the degree of randomness in the workload has nearly no effect on throughput or response time, and run-to-run variance becomes extremely low.

However, even in cached runs, write performance can still depend on the available disk bandwidth of the storage device. If the incoming rate of write requests outpaces the server's or array's ability to write the dirty blocks to disk, then after the write cache is filled and steady state is reached, a new write request can be completed only if certain blocks in the cache are physically written to disk and marked as free. For this reason, read performance in cached runs better represents the true performance capability of the host and the storage protocol, regardless of the type of storage server or array used.
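
For readers who want to approximate this workload outside of IOmeter, the sketch below shows the general shape of the access pattern: one logical worker keeping 16 sequential reads outstanding against a raw disk for a fixed run time. It is a rough Python approximation, not the tool used in this study; the device path, block size, and working-set size are placeholder assumptions.

    # A thread-pool approximation of the IOmeter pattern in Table 3: a single
    # logical worker keeping 16 sequential reads outstanding against a raw disk
    # for a fixed run time. The device path, block size, and working-set size
    # are placeholder assumptions, not values taken from this study.
    import os
    import time
    import statistics
    from concurrent.futures import ThreadPoolExecutor

    DEVICE = "/dev/sdb"            # placeholder raw data disk
    BLOCK_SIZE = 64 * 1024         # 64KB reads
    OUTSTANDING = 16               # outstanding I/Os, as in Table 3
    RUN_TIME = 180                 # 3 minutes, as in Table 3
    WORKING_SET = 100 * 1024**2    # wrap offsets so the working set stays cacheable

    def timed_read(fd, offset):
        """Issue one pread and return its latency in seconds."""
        start = time.time()
        os.pread(fd, BLOCK_SIZE, offset)
        return time.time() - start

    latencies = []
    fd = os.open(DEVICE, os.O_RDONLY)
    deadline = time.time() + RUN_TIME
    next_block = 0
    with ThreadPoolExecutor(max_workers=OUTSTANDING) as pool:
        while time.time() < deadline:
            # Issue a batch of 16 sequential reads; waiting for the whole batch
            # is a simplification of IOmeter's continuously refilled queue.
            offsets = [((next_block + i) * BLOCK_SIZE) % WORKING_SET
                       for i in range(OUTSTANDING)]
            next_block += OUTSTANDING
            latencies.extend(pool.map(timed_read, [fd] * OUTSTANDING, offsets))
    os.close(fd)

    total_mb = len(latencies) * BLOCK_SIZE / 1e6
    print("throughput: %.1f MB/sec" % (total_mb / RUN_TIME))
    print("mean latency: %.2f ms" % (statistics.mean(latencies) * 1000))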

Performance Results

Figure 1 shows the sequential read throughput in MB/sec achieved by running a single virtual machine in the standard workload configuration through each storage connection option. The results indicate that for I/O sizes at or above 64KB, the limit presented by a 2Gb Fibre Channel link is reached. For all IP-based storage connections, the 1Gb wire speed is reached with I/O sizes at or above 16KB.

Figure 1. Single virtual machine throughput comparison, 100 percent sequential read (higher is better)
[Chart not reproduced: y-axis Throughput (MB/sec), 0 to 200]

Figure 2 shows the sequential write throughput comparison. Because of the disk bandwidth limitation of the storage server or array, the maximum write throughput is consistently lower than read throughput. Also, the slightly lower performance with block sizes of 256KB and beyond is a result of the inability of the storage devices to handle large block sizes well. This is entirely an effect of the storage device, not of the ESX Server host. If the intended workload constantly issues large writes, see VMware knowledge base article 1003469, "Tuning ESX Server 3.5 for Better Storage Performance by Modifying the Maximum I/O Block Size" (http://kb.vmware.com/kb/1003469), for a workaround for this issue.

Figure 2. Single virtual machine throughput comparison, 100 percent sequential write (higher is better)
[Chart not reproduced: y-axis Throughput (MB/sec), 0 to 160]
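
The plateaus in Figures 1 and 2 line up with simple wire-speed arithmetic. The short calculation below restates those ceilings; the encoding-overhead figures are general link characteristics, not measurements from this study.

    # Back-of-the-envelope link ceilings behind the plateaus in Figures 1 and 2.
    # 2Gb Fibre Channel runs at 2.125 Gbaud with 8b/10b encoding, so its usable
    # payload rate is conventionally quoted as about 200 MB/sec; 1Gb Ethernet
    # carries at most 125 MB/sec before TCP/IP and iSCSI/NFS protocol headers.
    def ethernet_ceiling_mb(gbps):
        """Raw Ethernet payload ceiling in MB/sec, ignoring protocol headers."""
        return gbps * 1000 / 8

    def fibre_channel_ceiling_mb(gbaud):
        """Usable FC payload in MB/sec after 8b/10b encoding (8 data bits per 10 line bits)."""
        return gbaud * 1000 / 8 * 0.8

    print("2Gb Fibre Channel: ~%.0f MB/sec" % fibre_channel_ceiling_mb(2.125))  # ~212, quoted as 200
    print("1Gb Ethernet:      ~%.0f MB/sec" % ethernet_ceiling_mb(1))           # 125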

Figure 3 and Figure 4 show the response time measurements from the same tests. All the IP-based options have very similar characteristics in processing read requests, though NFS shows slightly longer response times in processing large writes. In general, Fibre Channel has the performance advantage at larger I/O sizes, mainly because of its higher wire speed. The difference in throughput and response time among the three IP-based storage connections is negligible in nearly all cases.

Figure 3. Single virtual machine response time comparison, 100 percent sequential read (lower is better)
[Chart not reproduced: y-axis Response time (ms), 0 to 8]

Figure 4. Single virtual machine response time comparison, 100 percent sequential write (lower is better)
[Chart not reproduced: y-axis Response time (ms), 0 to 10]

When you deploy a hypervisor such as ESX Server to consolidate physical servers, you expect multiple virtual machines to perform I/O concurrently. It is therefore very important that the hypervisor itself does not become a performance bottleneck in such scenarios. Figure 5 shows the aggregate throughput of 1, 4, 8, 16, and 32 virtual machines driving 64KB sequential reads to each virtual machine's dedicated LUN. In all cases, the data moves at wire speed. Clearly, for each storage connection option available to ESX Server, wire speed is maintained up to 32 virtual machines, showing that the wire speed, rather than ESX Server, limits the scalability.
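
Before turning to the multiple-virtual-machine results in Figure 5, note that the throughput curves (Figures 1 and 2) and the response-time curves (Figures 3 and 4) are two views of the same fixed-queue-depth workload, tied together by Little's Law. The snippet below illustrates the relationship with a made-up latency value, not a number measured in these tests.

    # With a fixed queue depth, throughput and response time are linked by
    # Little's Law: IOPS = outstanding I/Os / mean latency, and
    # throughput = IOPS * I/O size. The latency below is an illustrative value,
    # not a measurement from this study.
    OUTSTANDING = 16        # outstanding I/Os, as in Table 3
    IO_SIZE_KB = 64
    latency_s = 0.005       # example: 5 ms mean response time

    iops = OUTSTANDING / latency_s
    throughput_mb = iops * IO_SIZE_KB / 1000.0
    print("IOPS: %.0f  throughput: %.1f MB/sec" % (iops, throughput_mb))
    # 16 / 0.005 s = 3200 IOPS, and 3200 * 64KB is roughly 200 MB/sec: sustaining
    # 2Gb FC wire speed at this queue depth implies about 5 ms per 64KB request.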

Figure 5. Multiple virtual machine scalability, 100 percent sequential read (higher is better)
[Chart not reproduced: y-axis Throughput (MB/sec), 0 to 200; x-axis Number of virtual machines: 1, 4, 8, 16, 32]

The data shown so far demonstrates that all storage connection options available to ESX Server perform well, achieving and maintaining wire speed even with large numbers of virtual machines performing concurrent disk I/Os. In light of this, because of its intrinsic wire-speed advantage, especially if the 4Gb technology is deployed, Fibre Channel is likely the best choice if maximum possible throughput is the primary goal. Nevertheless, the data also demonstrates that the other options are quite capable and, depending on workloads and other factors, iSCSI and NFS may offer a better price-performance ratio.

In addition to aggregate throughput, another consideration is the CPU cost incurred for I/O operations. When you consolidate storage I/O-heavy applications, given a fixed amount of CPU power available to the ESX Server host, the CPU cost of performing I/O operations may play a major role in determining the consolidation ratio you can achieve. Figure 6 shows the CPU cost, relative to the Fibre Channel cost, when a virtual machine drives a mix of 50 percent reads and 50 percent writes at the 8KB block size, which is most common in enterprise-class applications such as database servers. Fibre Channel and hardware iSCSI HBAs both offload a major part of the protocol processing from the host CPUs and thus have very low CPU costs in most cases. For software iSCSI and NFS, host CPUs are involved in protocol processing, so higher CPU costs are expected. However, software iSCSI and NFS are fully capable of delivering the performance shown in the earlier results when CPU resource availability does not become a bottleneck.
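
For reference, the ratio plotted in Figure 6 (shown below) is a normalized CPU cost per I/O. A common way to compute such a metric is sketched here; the utilization and IOPS values are placeholders chosen only to illustrate the arithmetic, not results from this study.

    # A normalized CPU cost per I/O, as in Figure 6, can be computed as CPU MHz
    # consumed per I/O divided by the same quantity for the Fibre Channel
    # baseline. The utilization and IOPS figures below are placeholders that
    # illustrate the arithmetic only; they are not measurements from this study.
    def mhz_per_io(cpu_util_pct, num_cores, core_mhz, iops):
        """CPU MHz consumed per I/O operation."""
        return (cpu_util_pct / 100.0) * num_cores * core_mhz / iops

    fc_cost = mhz_per_io(cpu_util_pct=10, num_cores=8, core_mhz=2400, iops=20000)
    sw_iscsi_cost = mhz_per_io(cpu_util_pct=22, num_cores=8, core_mhz=2400, iops=20000)
    print("software iSCSI CPU cost: %.1fx Fibre Channel" % (sw_iscsi_cost / fc_cost))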

Figure 6. CPU cost per I/O compared to Fibre Channel cost, 8KB 50 percent read and 50 percent write (lower is better)
[Chart not reproduced: y-axis Ratio to Fibre Channel, 0 to 3; bars for Fibre Channel, hardware iSCSI, software iSCSI, and NFS]

Conclusion

This paper demonstrates that the four network storage connection options available to ESX Server are all capable of reaching a level of performance limited only by the media and the storage devices. Even with multiple virtual machines running concurrently on the same ESX Server host, this high performance is maintained. The data on CPU costs indicates that Fibre Channel and hardware iSCSI are the most CPU efficient, but in cases in which CPU consumption is not a concern, software iSCSI and NFS can also be part of a high-performance solution.

VMware, Inc. 3401 Hillview Ave., Palo Alto, CA 94304 www.vmware.com
Copyright (c) 2008 VMware, Inc. All rights reserved.