WHITE PAPER

What's New in VMware vSphere 4: Performance Enhancements

Scalability Enhancements...................................................... 3 CPU Enhancements............................................................ 4 Memory Enhancements........................................................ 4 Storage Enhancements......................................................... 5 Networking Enhancements..................................................... 7 Resource Management Enhancements.......................................... 8 Performance Management Enhancements....................................... 9 Application Performance...................................................... 9 Oracle............................................................................... 9 SQL Server.......................................................................... 1 SAP................................................................................ 1 Exchange.......................................................................... 11 Summary..................................................................... 12 References................................................................... 12 2

VMware vSphere 4, the industry's first cloud operating system, includes several unique new features that allow IT organizations to leverage the benefits of cloud computing with maximum efficiency, uncompromised control, and flexibility of choice. VMware vSphere 4 provides significant performance enhancements that make it easier for organizations to virtualize their most demanding and intense workloads. These enhancements give VMware vSphere 4 better:

Efficiency: Optimizations that reduce virtualization overhead and enable the highest consolidation ratios.

Control: Enhancements that improve ongoing performance monitoring and management, along with dynamic resource sizing for better scalability.

Choice: Improvements that provide a broader choice of guest operating systems, virtualization technologies, a more comprehensive hardware compatibility list (HCL), and integrations with third-party management tools.

This document outlines the key performance enhancements of VMware vSphere 4, organized into the following categories:

- Scalability Enhancements
- CPU, Memory, Storage, and Networking Enhancements
- Resource Management Enhancements
- Performance Management Enhancements

Finally, the paper showcases the resulting performance improvements in various tier-1 enterprise applications.

Scalability Enhancements

The key scalability improvements of vSphere 4 over VMware's previous datacenter product, VMware Infrastructure 3 (VI3), are summarized in the following table:

Feature | VI3 | vSphere 4
Virtual Machine CPU Count | 4 vCPUs | 8 vCPUs
Virtual Machine Memory Maximum | 64 GB | 255 GB
Host CPU Core Maximum | 32 cores | 64 cores
Host Memory Maximum | 256 GB | 1 TB
Powered-on VMs per ESX/ESXi Host Maximum | 128 | 256

For details, see the Systems Compatibility Guide and the Guest Operating System Installation Guide. Additional changes that enhance the scalability of vSphere include:

64 Logical CPUs and 256 Virtual CPUs Per Host: ESX/ESXi 4.0 provides headroom for more virtual machines per host and the ability to achieve even higher consolidation ratios on larger machines.

64-bit VMkernel: The VMkernel, a core component of the ESX/ESXi 4.0 hypervisor, is now 64-bit. This provides greater host physical memory capacity and more seamless hardware support than earlier releases.

64-bit Service Console: The Linux-based Service Console for ESX 4.0 has been upgraded to a 64-bit version derived from a recent release of a leading enterprise Linux vendor.
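The larger virtual machine maximums are exposed through the standard vSphere API. As a minimal sketch, not taken from this paper, here is how a VM could be resized up to the new 8-vCPU limit; it assumes the open-source pyVmomi bindings, and the host name, credentials, and VM name are placeholders:

```python
# Minimal sketch: resize a VM to vSphere 4's new 8-vCPU maximum.
# Assumes pyVmomi; host name, credentials, and VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # lab use only
si = SmartConnect(host="vc.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)

# Find the VM by name (simple linear search through a container view).
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "bigvm")

# Virtual hardware version 7 allows up to 8 vCPUs and 255 GB of RAM per VM.
spec = vim.vm.ConfigSpec(numCPUs=8, memoryMB=32 * 1024)
task = vm.ReconfigVM_Task(spec=spec)             # VM must be powered off
Disconnect(si)
```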

New Virtual Hardware: ESX/ESXi 4.0 introduces a new generation of virtual hardware (virtual hardware version 7), which adds significant new features including:

- Serial Attached SCSI (SAS) virtual device for Microsoft Cluster Service: Provides support for running Windows Server 2008 in a Microsoft Cluster Service configuration.
- IDE virtual device: Ideal for supporting older operating systems that lack SCSI drivers.
- VMXNET Generation 3: See the Networking section.
- Virtual Machine Hot Plug Support: Provides support for adding and removing virtual devices, adding virtual CPUs, and adding memory to a virtual machine without having to power off the virtual machine.

Hardware version 7 is the default for new ESX/ESXi 4.0 virtual machines. ESX/ESXi 4.0 will continue to run virtual machines created on hosts running ESX Server versions 2.x and 3.x. Virtual machines that use virtual hardware version 7 features are not compatible with ESX/ESXi releases prior to version 4.0.

VMDirectPath for Virtual Machines: VMDirectPath I/O device access enhances CPU efficiency for workloads that require constant and frequent access to I/O devices by allowing virtual machines to access the underlying hardware devices directly. Other virtualization features, such as VMotion, hardware independence, and sharing of physical I/O devices, are not available to virtual machines using this feature. VMDirectPath I/O for networking I/O devices is fully supported with the Intel 82598 10 Gigabit Ethernet Controller and the Broadcom 57710 and 57711 10 Gigabit Ethernet Controllers. It is experimentally supported for storage I/O devices with the QLogic QLA25xx 8Gb Fibre Channel, the Emulex LPe12000 8Gb Fibre Channel, and the LSI 3442e-R and 3801e (1068 chip based) 3Gb SAS adapters.

Increased NFS Datastore Support: ESX now supports up to 64 NFS shares as datastores in a cluster.

CPU Enhancements

Resource Management and Processor Scheduling: The ESX 4.0 scheduler includes several new features and enhancements that help improve the throughput of all workloads, with notable gains for I/O-intensive workloads:

- Relaxed co-scheduling of vCPUs, introduced in earlier versions of ESX, has been further fine-tuned, especially for SMP VMs.
- The ESX 4.0 scheduler uses new, finer-grained locking that reduces scheduling overhead in cases where frequent scheduling decisions are needed.
- The new scheduler is aware of processor cache topology and takes the processor cache architecture into account to optimize CPU usage.

For I/O-intensive workloads, interrupt delivery and the associated processing costs make up a large component of the virtualization overhead. The scheduler enhancements above greatly improve the efficiency of interrupt delivery and the associated processing.

Memory Enhancements

Hardware-assisted Memory Virtualization: Memory management in virtual machines differs from physical machines in one key aspect: virtual memory address translation. Guest virtual memory addresses must first be translated to guest physical addresses using the guest OS's page tables before finally being translated to machine physical memory addresses. ESX performs the latter step with a set of shadow page tables for each virtual machine; creating and maintaining these shadow page tables adds both CPU and memory overhead. Hardware support is available in current processors to alleviate this situation: the hardware-assisted memory management capabilities from Intel and AMD are called EPT and RVI, respectively.
This support consists of a second level of page tables implemented in hardware, containing guest physical to machine memory address translations. ESX 4.0 introduces support for the Intel Xeon processors that provide EPT; support for AMD RVI has existed since ESX 3.5.
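The memory virtualization mode can be steered per virtual machine through a vmx advanced option. The following is a minimal sketch, again assuming pyVmomi; the option key "monitor.virtual_mmu" and its values are an assumption to verify against the vSphere 4 documentation:

```python
# Minimal sketch: prefer hardware-assisted memory virtualization (EPT/RVI)
# for one VM by setting a vmx advanced option via the vSphere API.
# Assumes pyVmomi and an existing VM object `vm`; the option key
# "monitor.virtual_mmu" should be verified against the vSphere 4 docs.
from pyVmomi import vim

def prefer_hardware_mmu(vm):
    opt = vim.option.OptionValue(key="monitor.virtual_mmu",
                                 value="hardware")  # or "automatic"/"software"
    spec = vim.vm.ConfigSpec(extraConfig=[opt])
    return vm.ReconfigVM_Task(spec=spec)            # applies at next power-on
```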

Figure 1 illustrates the efficiency improvements seen for a few example workloads when hardware-assisted memory virtualization is used.

Figure 1: Efficiency improvements using hardware-assisted memory virtualization (example workloads: Apache compile, SQL Server, Citrix XenApp).

While this hardware support obviates the need to maintain shadow page tables (and the associated performance overhead), it introduces some costs of its own. Translation lookaside buffer (TLB) miss costs, in the form of increased latency, are higher with two-level page tables than with a one-level table. Using large memory pages, a feature available since ESX 3.5, reduces the number of TLB misses. Since TLB miss latency is higher with this form of hardware virtualization assist, but large pages reduce the number of TLB misses, the combination of hardware assist and the large page support in vSphere yields optimal performance.

Storage Enhancements

A variety of architectural improvements have been made to the storage subsystem of vSphere 4. The combination of the new paravirtualized SCSI driver and additional ESX kernel-level storage stack optimizations dramatically improves storage I/O performance; with these improvements, all but a very small segment of the most I/O-intensive applications become attractive targets for VMware virtualization.

VMware Paravirtualized SCSI (PVSCSI): Emulated versions of the BusLogic and LSI Logic hardware storage adapters were the only choices available in earlier ESX releases. The advantage of this full virtualization is that most operating systems ship drivers for these devices; the disadvantage is that it precludes performance optimizations that are only possible in virtualized environments. To this end, ESX 4.0 ships with a new virtual storage adapter, Paravirtualized SCSI (PVSCSI). PVSCSI adapters are high-performance storage adapters that offer greater throughput and lower CPU utilization for virtual machines, and they are best suited for environments in which guest applications are very I/O intensive. The PVSCSI adapter extends to the storage stack the performance gains associated with other paravirtual devices, such as the VMXNET network adapter available in earlier versions of ESX. As with the other paravirtual devices, the PVSCSI adapter improves efficiency by:

- Reducing the cost of virtual interrupts
- Batching the processing of I/O requests
- Batching I/O completion interrupts

A further optimization, specific to virtual environments, reduces the number of context switches between the guest and the Virtual Machine Monitor. Efficiency gains from PVSCSI can result in an additional 2x CPU savings for Fibre Channel (FC) and up to 30 percent CPU savings for iSCSI.

Figure 2: Efficiency gains with the PVSCSI adapter (efficiency of 4K block I/Os, LSI Logic vs. PVSCSI, for software iSCSI and Fibre Channel).

VMware recommends that you create a primary adapter for use with the disk that hosts the system software (the boot disk) and a separate PVSCSI adapter for the disk that stores user data, such as a database or mailbox. The primary adapter is the default for the guest operating system on the virtual machine; for example, for virtual machines with the Microsoft Windows Server 2008 guest operating system, LSI Logic is the default primary adapter. (A configuration sketch for adding a PVSCSI adapter follows below.)

iSCSI Support Improvements: vSphere 4 includes significant updates to the iSCSI stack, both for software iSCSI (in which the iSCSI initiator runs at the ESX layer) and for hardware iSCSI (in which ESX leverages a hardware-optimized iSCSI HBA). These changes dramatically improve both the performance and the functionality of software and hardware iSCSI, and they deliver a significant reduction in CPU overhead for software iSCSI. Efficiency gains in the iSCSI stack can amount to 7-26 percent CPU savings for reads and 18-52 percent for writes.

Figure 3: iSCSI CPU efficiency gains, ESX 4 vs. ESX 3.5 (read and write, hardware and software iSCSI).
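As a minimal sketch of the adapter recommendation above (assuming pyVmomi; the bus number and device key are illustrative), a second, PVSCSI-backed controller for data disks could be added like this:

```python
# Minimal sketch: add a second SCSI controller of the paravirtualized
# (PVSCSI) type for data disks, leaving the boot disk on the default
# LSI Logic adapter. Assumes pyVmomi and a powered-off VM object `vm`.
from pyVmomi import vim

def add_pvscsi_controller(vm, bus_number=1):
    ctrl = vim.vm.device.ParaVirtualSCSIController(
        busNumber=bus_number,                       # controller's SCSI bus
        sharedBus=vim.vm.device.VirtualSCSIController.Sharing.noSharing,
        key=-101)                                   # temporary negative key
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=ctrl)
    spec = vim.vm.ConfigSpec(deviceChange=[change])
    return vm.ReconfigVM_Task(spec=spec)
```

Data disks attached to this controller then use the PVSCSI driver inside the guest, while the boot disk stays on the default adapter.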

Software iSCSI and NFS Support with Jumbo Frames: vSphere 4 adds support for jumbo frames with both the NFS and iSCSI storage protocols, on 1Gb as well as 10Gb NICs. The 10Gb support for iSCSI allows for 10x the I/O throughput; more details appear in the networking section below.

Improved I/O Concurrency: Asynchronous I/O execution has always been a feature of ESX. However, ESX 4.0 improves the concurrency of the storage stack with an I/O mode that allows vCPUs in the guest to execute other tasks after initiating an I/O request, while the VMkernel handles the actual physical I/O. In VMware's February 2009 announcement on Oracle database OLTP performance, the gains attributed to this improved concurrency model were measured at 5 percent.

Networking Enhancements

Significant changes have been made to the vSphere 4 network subsystem, delivering dramatic performance improvements.

VMXNET Generation 3: vSphere 4 includes VMXNET3, the third generation of VMware's paravirtualized NIC adapter. New VMXNET3 features over the previous version (Enhanced VMXNET) include:

- MSI/MSI-X support (subject to guest operating system kernel support)
- Receive Side Scaling (supported in Windows Server 2008 when explicitly enabled through the device's Advanced configuration tab)
- IPv6 checksum and TCP Segmentation Offload (TSO) over IPv6
- VLAN offloading
- Large TX/RX ring sizes (configured from within the virtual machine)

(A device-configuration sketch follows at the end of this section.)

Network Stack Performance and Scalability: vSphere 4 includes optimizations to the network stack that can saturate 10Gbps links for both transmit and receive side network I/O. The improvements in the VMkernel TCP/IP stack also improve iSCSI throughput as well as the maximum network throughput for VMotion. vSphere 4 uses transmit queues to provide 3x throughput improvements in transmit performance for small packet sizes.

Figure 4: Network transmit throughput improvement for vSphere 4 over ESX 3.5, measured with 1, 4, 8, and 16 VMs.

vSphere 4 also supports Large Receive Offload (LRO), a feature that coalesces TCP packets from the same connection to reduce CPU utilization. Using LRO with ESX provides a 40 percent improvement in both throughput and CPU costs.
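To make the VMXNET3 discussion concrete, here is a minimal sketch (assuming pyVmomi; the port group name is a placeholder) of adding a VMXNET3 NIC to a virtual machine. Note that taking advantage of jumbo frames additionally requires raising the MTU on the vSwitch, for example with esxcfg-vswitch -m 9000 on ESX 4.0:

```python
# Minimal sketch: add a VMXNET3 NIC to a VM. Assumes pyVmomi and an
# existing VM object `vm`; the port group name is a placeholder.
from pyVmomi import vim

def add_vmxnet3_nic(vm, portgroup_name="VM Network"):
    nic = vim.vm.device.VirtualVmxnet3()
    nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
        deviceName=portgroup_name)                  # standard vSwitch port group
    nic.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
        startConnected=True)
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=nic)
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```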

Resource Management Enhancements

VMotion: Performance enhancements in vSphere 4 reduce the time to VMotion a VM by up to 75 percent.

Storage VMotion Performance: Storage VMotion is now fully supported (it was experimental before) and has much improved switchover time; for very I/O-intensive VMs, this improvement can be 10x. Storage VMotion leverages a new, more efficient block copy mechanism called Changed Block Tracking, reducing CPU and memory resource consumption on the ESX host by up to two times.

Figure 5: Decreased Storage VMotion time, ESX 3.5 vs. ESX 4.

Figure 6: Improved VMFS performance (VM provisioning time), ESX 3.5 vs. ESX 4.

Figure 7: Performance enhancements lead to a reduced elapsed VMotion time (seconds, lower is better; 4GB VM on ESX 3.5 vs. ESX 4, during SPECjbb (active) and after SPECjbb (idle)).
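Storage VMotion is driven through the same RelocateVM_Task API used for cold relocation. A minimal sketch (assuming pyVmomi, a connected ServiceInstance, and a placeholder datastore name) of moving a running VM's disks to another datastore:

```python
# Minimal sketch: Storage VMotion a running VM's disks to another
# datastore via RelocateVM_Task. Assumes pyVmomi, a connected
# ServiceInstance `si`, a VM object `vm`, and a target datastore name.
from pyVmomi import vim

def storage_vmotion(si, vm, datastore_name="fast-fc-lun"):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    target = next(d for d in view.view if d.name == datastore_name)
    spec = vim.vm.RelocateSpec(datastore=target)    # disks move, host stays
    return vm.RelocateVM_Task(spec=spec)
```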

Figure 8: Time to boot 512 VDI VMs (512 VM boot storm over Fibre Channel), ESX 3.5 vs. ESX 4.

VM Provisioning: VMFS performance improvements offer more efficient VM creation and cloning. This use case is especially important given vSphere's more ambitious role as a cloud operating system.

Performance Management Enhancements

Enhanced vCenter Server Scalability: As organizations adopt server virtualization at an unprecedented level, the need to manage large-scale virtual datacenters is growing significantly. To address this, vCenter Server, included with vSphere 4, has been enhanced to manage up to 300 hosts and 3,000 virtual machines. You also have the ability to link multiple vCenter Servers in your environment with vCenter Server Linked Mode to manage up to 10,000 virtual machines from a single console.

vCenter Performance Charts Enhancements: Performance charts in vCenter have been enhanced to provide a single view of all performance metrics, such as CPU, memory, disk, and network, without navigating through multiple charts. In addition, the performance charts include the following improvements:

- Aggregated charts show high-level summaries of resource distribution, which is useful for identifying the top consumers.
- Thumbnail views of hosts, resource pools, clusters, and datastores allow for easy navigation to the individual charts.
- Drill-down capability across multiple levels in the inventory helps isolate the root cause of performance problems quickly.
- Detailed datastore-level views show utilization by file type and unused capacity.

Application Performance

Oracle: VMware testing has shown that, running a resource-intensive OLTP benchmark based on a non-comparable implementation of the TPC-C* workload specification, an Oracle database in an 8-vCPU VM with vSphere 4 achieved 85 percent of native performance. This workload demonstrated 8,900 database transactions per second and 60,000 disk inputs/outputs per second (IOPS). The results demonstrated in this proof point represent the most I/O-intensive application-based workload ever run in an x86 virtual environment to date.

*The benchmark was a fair-use implementation of the TPC-C business model; these results are not TPC-C compliant results, and not comparable to official TPC-C results. TPC Benchmark is a trademark of the TPC.

Figure 9: ESX 4 Oracle DB VM throughput compared to a 2-CPU native configuration (relative throughput for 2-, 4-, and 8-processor configurations).

The results above were run on a server with only eight physical cores, so the 8-way VM configuration was not under-committing the host. The slightly less committed four-vCPU configuration ran at 88 percent of native.

SQL Server: Running an OLTP benchmark based on a non-comparable implementation of the TPC-E* workload specification, a SQL Server virtual machine with four virtual CPUs on vSphere 4.0 showed 90 percent efficiency with respect to native. The SQL Server VM with a 50 GB database performed 1,500 IOPS and drove 50 Mb/s of network throughput.

Figure 10: vSphere 4 SQL Server VM throughput compared to a 1-CPU native configuration (relative scaling ratio for 1-, 2-, and 4-CPU configurations).

SAP: VMware testing demonstrated that SAP running in a VM with vSphere 4 scaled linearly from one to eight vCPUs per VM and achieved 95 percent of native performance on a standard 2-tier SAP benchmark. This multi-tiered application architecture includes the SAP application tier and the back-end SQL Server database instantiated in a single virtual machine.

*The benchmark was a fair-use implementation of the TPC-E business model; these results are not TPC-E compliant results, and not comparable to official TPC-E results. TPC Benchmark is a trademark of the TPC.

Figure 11: ESX 4 SAP VM throughput compared to a 1-CPU native configuration (relative scaling ratio for 1-, 2-, 4-, and 8-CPU configurations).

Exchange: Microsoft Exchange Server is one of the most demanding applications in today's datacenters, save the very largest databases being deployed. Previous work on virtual Exchange deployments showed VMware's ability to improve performance over native configurations by designing an Exchange architecture with a greater number of mailbox server instances, each running fewer mailboxes per instance. With the performance enhancements added to vSphere 4, single-VM Exchange mailbox servers have been demonstrated at up to 8,000 mailboxes per instance. This means that Exchange administrators have the option of choosing the higher-performing smaller mailbox servers or the more cheaply licensed large mailbox servers.

Figure 12: vSphere performance enhancements with Microsoft Exchange: ESX 4 Exchange mailbox count (users, in thousands) and 95th-percentile latency (ms) for 1 to 8 VMs, including the region where the number of vCPUs exceeds the number of physical CPUs.

Summary

VMware innovations continue to make VMware vSphere 4 the industry standard for computing in datacenters of all sizes and across all industries. The numerous performance enhancements in VMware vSphere 4 enable organizations to get even more out of their virtual infrastructure and further reinforce the role of VMware as the industry leader in virtualization. vSphere represents a dramatic advance in performance over VMware Infrastructure 3, ensuring that even the most resource-intensive and scale-out applications, such as large databases and Microsoft Exchange email systems, can run on private clouds powered by vSphere.

References

Performance Evaluation of AMD RVI Hardware Assist: http://www.vmware.com/pdf/rvi_performance.pdf

Performance Evaluation of Intel EPT Hardware Assist: http://www.vmware.com/pdf/perf_esx_intel-ept-eval.pdf

VMware, Inc. 3401 Hillview Ave, Palo Alto, CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com

Copyright 2009 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. VMW_9Q1_WP_vSpherePerformance_P13_R1