What's New in VMware vSphere 4.1 Performance
VMware vSphere 4.1
TECHNICAL WHITE PAPER

Table of Contents

Introduction
Scalability Enhancements
    New Resource Limits
CPU Enhancements
    Wide VM NUMA
Memory Enhancements
    Memory Compression
Storage Enhancements
    Storage I/O Control (SIOC)
Storage Protocol Improvements
    8Gb FC Storage Support
    NFS Improvements
    Software iSCSI Improvements
Network Enhancements
    Transmit Performance
    Software and Hardware Large Receive Offload (LRO)
    Fault-Tolerant Virtual Machine Throughput
VDI Enhancements
vMotion Enhancements
    Scalable vMotion
    Storage vMotion
VMware vCenter and Performance Management Enhancements
    Storage Performance Monitoring
Summary
References

Introduction

VMware vSphere 4.1 ("vSphere"), the industry's first cloud operating system, pushes the performance and scalability of the VMware virtualization platform further ahead. vSphere 4.1 enables higher consolidation ratios with unequaled performance by providing groundbreaking new memory-management technology and by expanding its resource-pooling capabilities with new granular controls for storage and the network. vSphere 4.1 scales to an unmatched number of virtual machines and virtualized hosts to support the buildout of private and public clouds at even lower operational costs than before. Some of the highlights:

Compute/Performance
- Memory compression: reclaims application performance by reducing memory contention as a bottleneck.

Storage
- Storage I/O Control: sets storage quality-of-service priorities per virtual machine, giving high-priority applications better access to storage resources.
- Performance reporting: delivers new key storage performance statistics for NFS.

vMotion
- Supports a larger number of concurrent operations with lower latencies and increased throughput.

Network
- Transmit performance: increases virtual-machine-to-virtual-machine communication throughput by 2x.

VMware vCenter
- Scalability: vSphere 4.1 fully virtualizes the data center and scales at least 3x further than ever before.
- Faster vCenter Server startup, faster vSphere client connections, faster add-host operations, and better vSphere client responsiveness with snappier user interaction.

Scalability Enhancements

New Resource Limits

In vSphere 4.1, a number of scalability enhancements have been incorporated into VMware vCenter.

                                                   vSphere 4    vSphere 4.1    Improvement
Virtual machines per cluster                       1,280        3,000          3x
Hosts per vCenter Server                           300          1,000          3x
Powered-on virtual machines per vCenter Server     3,000        10,000         3x
Registered virtual machines per vCenter Server     4,500        15,000         >3x
Concurrent vSphere clients                         30           100            3x

CPU Enhancements

Wide VM NUMA (new feature)

Modern processors from both Intel and AMD have memory local to a processor socket. Access to this memory is faster from the socket associated with the memory than from other sockets on the system. The speed of memory access is thus nonuniform across sockets, and such systems are said to have a non-uniform memory access (NUMA) architecture. In such systems, a set of processors and the memory local to it is referred to as a NUMA node. VMware ESX/ESXi uses a sophisticated NUMA-aware scheduler to ensure that virtual machines with vCPUs running on a NUMA node access memory local to that node.

A virtual machine that has more virtual processors than the number of processors available on a NUMA node must span NUMA nodes. With this release, such a virtual machine is intelligently NUMA-managed for optimal performance: it now benefits from memory locality in a manner similar to a virtual machine with a smaller vCPU count that fits within a single NUMA node.

Performance Improvement

The new Wide VM NUMA feature ensures that a virtual machine whose vCPU count spans NUMA nodes is managed by the NUMA scheduler, resulting in better memory locality for the applications within the virtual machine. This better memory locality leads to improved application performance. The benefit depends on the workload and configuration; internal studies based on SPECjbb, a server-side Java workload, show up to 7% performance improvement.
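To make the "wide VM" concept concrete, the following sketch uses the open-source pyVmomi Python bindings (an illustration, not part of this paper) to check whether a virtual machine's vCPU count exceeds the cores available in one of its host's NUMA nodes. The vCenter address, credentials and VM name are hypothetical.

# Minimal pyVmomi sketch: is this VM "wide", i.e. does it span NUMA nodes?
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the VM by name (hypothetical name 'bigdb01').
vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view
vm = next(v for v in vms if v.name == 'bigdb01')

host = vm.runtime.host
numa = host.hardware.numaInfo                  # host NUMA topology
cores_per_node = host.hardware.cpuInfo.numCpuCores // numa.numNodes
if vm.config.hardware.numCPU > cores_per_node:
    print('wide VM: spans NUMA nodes; vSphere 4.1 NUMA-manages it')
else:
    print('VM fits within a single NUMA node')
Disconnect(si)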

Memory Enhancements

Memory Compression (new feature)

Memory compression, in vSphere 4.1, adds a new level to VMware's memory-overcommit hierarchy, a key VMware differentiator. This new level of memory overcommit is positioned between ballooning and disk swapping: in periods of memory contention on the host, right before swapping a page to disk, ESX/ESXi first tries to compress it. This is yet another way to reclaim performance when memory is under contention, keeping disk swapping as the last resort. The full hierarchy is: transparent page sharing, ballooning, memory compression, disk swapping.

How does it work? Accessing compressed memory is faster than accessing memory swapped to disk. When transparent page sharing and ballooning have been exhausted, a virtual page must be swapped to disk. In vSphere 4.1, there is first an attempt to compress the page and store it in the virtual machine's compression cache. The maximum compression cache size can be set using the Advanced Settings dialog box in the vSphere client.

Figure 1. The memory-overcommit hierarchy: transparent page sharing, ballooning, memory compression, then disk swapping. Decompression is sub-millisecond, compared to a disk swap-in.
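The compression cache ceiling can also be read and set programmatically through the host's advanced options. The pyVmomi sketch below is an illustration only: the option name Mem.MemZipMaxPct is an assumption based on common ESX builds, not something this paper specifies, and the host name and credentials are hypothetical.

# Hedged sketch: inspect/adjust the compression cache cap on one host.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='esx01.example.com', user='root', pwd='secret',
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

opt_mgr = host.configManager.advancedOption
# Read the current value (percent of each VM's memory usable as cache).
current = opt_mgr.QueryOptions('Mem.MemZipMaxPct')[0]   # assumed option name
print('compression cache max (% of VM memory):', current.value)

# Cap the compression cache at 10% of each VM's configured memory.
opt_mgr.UpdateOptions(changedValue=[
    vim.option.OptionValue(key='Mem.MemZipMaxPct', value=10)])
Disconnect(si)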

Performance Improvements

The following graph depicts the relative cumulative performance of 16 virtual machines running the Oracle Swingbench workload, with and without memory compression enabled. The workload fits into memory until about 70GB of host memory. As host memory size decreases below that, the workload incurs swapping, which is expected. Disk swapping is expensive and lowers workload performance. Memory compression recovers much of that performance by avoiding swapping to disk as much as possible; only when transparent page sharing, ballooning and all compression options are exhausted is disk swapping used. As a host becomes memory starved there is:

- Up to 15% performance improvement with some memory overcommitment
- Up to 25% performance improvement with heavy memory overcommitment

Figure 2. Swingbench performance, 16 virtual machines, 80GB total VM memory, page sharing disabled, ESX 4.1: normalized throughput with and without memory compression at host memory sizes of 96, 80, 70, 60 and 50GB.

Storage Enhancements

Storage I/O Control (new feature)

Using Storage I/O Control (SIOC), vSphere administrators can ensure that the most important virtual machines get adequate I/O resources in times of congestion. SIOC enables cluster-wide virtual machine storage I/O prioritization: it extends the concepts of CPU and memory shares and limits to storage I/O resources. SIOC lets vSphere administrators set I/O shares and limits per virtual machine and prioritize I/O bandwidth, sustaining performance for the most mission-critical applications.

Figure 3. Storage I/O Control example: an online store VM (CPU shares high, memory shares high, I/O shares high), a Microsoft Exchange VM (CPU shares high, memory shares high, I/O shares high) and a data-mining VM (CPU shares low, memory shares low, I/O shares low) sharing a 32GHz/16GB resource pool and Datastore A.

How does it work? When SIOC is enabled on a datastore (FC and iSCSI are supported), ESX/ESXi begins to monitor the device latency that hosts detect when communicating with that datastore. When device latency exceeds a congestion threshold (30ms by default) sustained for at least 4 seconds, the datastore is considered congested, and each virtual machine that accesses it is allocated I/O resources in proportion to its I/O shares as configured by the administrator. Storage I/O shares can be adjusted for each virtual machine based on need, keeping storage allocations in line with virtual machine priorities. The feature provides per-datastore priorities/shares for virtual machines to improve total throughput, with cluster-level enforcement of shares for all virtual machines accessing a datastore. Configuring SIOC for a datastore is a three-step process:

1. Enable SIOC.
2. Set a congestion threshold.
3. Configure the number of storage I/O shares and the upper limit of I/O operations per second (IOPS) allowed for each virtual machine. By default, all virtual machines' shares are set to Normal (1,000) with unlimited IOPS.

Figure 4 provides an example of how the assignment of I/O shares using SIOC results in prioritized access to the storage array; an API sketch of the same three steps follows.
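The following pyVmomi sketch (an illustration, not part of this paper) walks the three steps through the vSphere API: it enables SIOC on a datastore with the default 30ms congestion threshold, then gives one virtual machine's disk high I/O shares and a 1,000 IOPS cap. Datastore and VM names are hypothetical.

# Hedged sketch: enable SIOC on a datastore, then prioritize one VM's disk.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find(vimtype, name):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    return next(o for o in view.view if o.name == name)

# Steps 1 and 2: enable SIOC with the default 30ms congestion threshold.
ds = find(vim.Datastore, 'DatastoreA')
iorm_spec = vim.StorageResourceManager.IORMConfigSpec(
    enabled=True, congestionThreshold=30)
content.storageResourceManager.ConfigureDatastoreIORM_Task(ds, iorm_spec)

# Step 3: high I/O shares and a 1,000 IOPS limit for the online-store VM.
vm = find(vim.VirtualMachine, 'onlinestore01')
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))
disk.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
    shares=vim.SharesInfo(level='high'), limit=1000)
change = vim.vm.device.VirtualDeviceSpec(operation='edit', device=disk)
vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))
Disconnect(si)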

Figure 4. Without Storage I/O Control, the actual disk resources consumed by each VM are not in the configured ratio; with Storage I/O Control, they are in the correct ratio even across ESX hosts.

Performance Improvements

The SIOC feature enforces a proportional I/O share for virtual machines across hosts in an ESX cluster and supports both FC and iSCSI SAN storage. Figure 5 compares the performance of a critical workload as the portion of the host's I/O queue allocated to its virtual machine was varied.

Figure 5. Impact of Storage I/O Control on performance of the DVD Store workload: normalized DS2 throughput (orders per minute) for the standalone workload versus 25% (no SIOC), 45.5%, 66.7% and 87% of the storage resource dedicated to the critical VM.

In Figure 5, the blue bar represents performance of the workload in isolation. The orange bars represent the workload while four other identical loads run simultaneously. Performance drops by 41% when the workload must share resources equally with the other workloads. With SIOC enabled and the workload prioritized as high, very little performance is lost: the high-priority workload performs at 94% of the isolated (blue bar) level.

Storage Protocol Improvements

Optimized networked-storage performance is crucial for virtualized datacenter performance and scalability. vSphere 4.1 brings a number of key improvements.

8Gb FC Storage Support

vSphere 4.1 now supports 8Gb FC storage arrays, so ESX/ESXi can be deployed with end-to-end 8Gb FC SANs. With 8Gb support, ESX effectively doubles the measured throughput over 4Gb for transfer sizes greater than 8KB, as shown in the following graph.

Figure 6. Read I/O throughput (MBps) at transfer sizes from 1KB to 256KB, 4Gb versus 8Gb FC.

NFS Improvements

Using storage microbenchmarks, we observe that vSphere 4.1 NFS over 10GbE shows improvements in the range of 12-40% for reads and 32-124% for writes. Actual improvements vary with workload characteristics.

Software iSCSI Improvements

Using storage microbenchmarks, we observe that vSphere 4.1 software iSCSI over 10GbE shows improvements in the range of 6-23% for reads and 8-19% for writes. Actual improvements vary with workload characteristics.

Network Enhancements

Transmit Performance

In vSphere 4.1, virtual machine packet transmission has been made asynchronous, and the asynchronous (async) transmit thread can run on any CPU core. Asynchronous transmission lets the virtual machine continue to make progress during packet transmission, and allowing the async thread to run on any available CPU improves scalability.

- Virtual-machine-to-virtual-machine throughput improves by as much as 2x, up to 19Gb/sec.
- Virtual-machine-to-native throughput improves by 10%.

Software and Hardware Large Receive Offload

ESX now supports hardware LRO for Linux guests. LRO allows ESX to aggregate packets before delivering them to a virtual machine, which reduces per-packet processing overhead: the virtual machine is not interrupted for each packet, and networking-stack processing cost is amortized across multiple packets. Receive networking tests indicate a 5-30% improvement in throughput and a 40-60% decrease in CPU cost, depending on the workload.

Fault-Tolerant Virtual Machine Throughput

Improvements in the VMware Fault Tolerance (VMware FT) module as well as in the networking stack have resulted in significantly improved fault-tolerant networking throughput over 10GbE. Fault-tolerant virtual machines generating large amounts of logging traffic can now fully utilize (~9Gb/sec) the bandwidth of a 10GbE network: a significant 3.6x increase in fault-tolerant throughput in vSphere 4.1 (9Gb/sec) over vSphere 4.0 (2.5Gb/sec).

VDI Enhancements

A number of platform improvements enhance VDI performance:

1. Memory compression improves boot-storm time by up to 40% under moderate memory overcommitment.
2. Critical VDI operations such as Create Virtual Machine, Power On/Off, Register/Unregister and Reconfigure improve significantly compared to vSphere 4.0: Create Virtual Machine is 60% faster and Power On is 3.4x faster.
3. vSphere 4.1 achieves significantly higher configuration limits (see Configuration Maximums - VMware vSphere 4.1).
4. Asynchronous network transmission (discussed under Network Enhancements) improves VDI performance by as much as 8%.

vMotion Enhancements

Scalable vMotion

The number of concurrent live migrations has increased from a default of two in vCenter 4.0 to as many as eight (on a 10GbE network) in vCenter 4.1, with the same zero downtime, continuous service availability and complete transaction integrity. vMotion has always leveraged the performance of a 10GbE network; scalable vMotion now leverages it even further to support the simultaneous migration of up to eight virtual machines, as the sketch below illustrates.
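A minimal pyVmomi sketch (an illustration, not part of this paper) of issuing several live migrations through the API; vCenter 4.1 itself schedules up to eight concurrently per 10GbE-connected host and queues the rest. Host names, VM names and credentials are hypothetical.

# Hedged sketch: request several vMotions; vCenter enforces concurrency.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find(vimtype, name):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    return next(o for o in view.view if o.name == name)

target = find(vim.HostSystem, 'esx02.example.com')
for name in ('web01', 'web02', 'web03'):      # hypothetical VM names
    vm = find(vim.VirtualMachine, name)
    vm.MigrateVM_Task(host=target,
                      priority=vim.VirtualMachine.MovePriority.highPriority)
Disconnect(si)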

Figure 7.

Performance Improvements

In vSphere 4.1, scalable vMotion improves throughput by more than 3x (to 8Gb/sec) on 10GbE links, improving vMotion performance and scalability. Specific optimizations in ESX 4.1 include:

- A reorganized network I/O model (STREAMS): vMotion has been optimized to maximize its transmit and receive rates and dynamically leverage available host resources, allowing hosts with spare processor and memory resources to execute with higher concurrency and lower latency, and to increase bandwidth on networks faster than 1Gbps.
- Optimized virtual machine memory handling.
- Optimized vMotion convergence logic (how and when vMotion exits).
- Quick resume (RDPI), which allows the guest to make progress concurrently with memory updates, reducing latency for many workloads, especially those with high page-dirty rates, such as memory-intensive and some database workloads.

Storage vMotion

VMware Storage vMotion is a component of VMware vSphere that provides an interface for live migration of virtual machine disk files within and across storage arrays, with no downtime or disruption in service. Storage vMotion relocates virtual machine disk files from one shared storage location to another with zero downtime, continuous service availability and complete transaction integrity. Customers use Storage vMotion to:

- Simplify array migrations and storage upgrades
- Dynamically optimize storage I/O performance
- Efficiently manage storage capacity

Storage vMotion is fully integrated with VMware vCenter Server to provide easy migration and monitoring.
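Through the API, a Storage vMotion is a relocate request whose spec names only a target datastore; the VM stays powered on throughout. The following pyVmomi sketch is an illustration (not part of this paper), with hypothetical object names.

# Hedged sketch: live-migrate a VM's disks to another datastore.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find(vimtype, name):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    return next(o for o in view.view if o.name == name)

vm = find(vim.VirtualMachine, 'onlinestore01')   # hypothetical names
target = find(vim.Datastore, 'DatastoreB')
task = vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=target))
print('Storage vMotion started:', task.info.key)
Disconnect(si)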

Figure 8.

Performance Improvements

Depending on application workload characteristics and disk size, the time spent updating the destination after cloning is reduced by up to 25%.

VMware vCenter and Performance Management Enhancements

A number of vCenter optimizations have significantly reduced the latency of common provisioning operations. Some operations, such as registering and reconfiguring virtual machines, improve by as much as 3x compared to vSphere 4.0.

Figure 9. Latency of concurrent Reconfigure VM operations via vCenter, in seconds, for 1 to 320 VMs on the host: vSphere 4.0 versus vSphere 4.1.

Figure 10. Latency of adding a host to vCenter, in seconds, for 1 to 320 powered-on VMs on the host: vSphere 4.0 versus vSphere 4.1.

Storage Performance Monitoring

New storage performance monitoring statistics, including statistics for NFS, are available in vSphere 4.1. These comprehensive host and virtual machine storage performance statistics enable proactive monitoring to simplify troubleshooting. They support heterogeneous storage environments (FC, iSCSI, NFS), real-time and historical trending in vCenter, and esxtop (for ESX) and resxtop (for ESXi). The tools support varied usage scenarios:

- GUI for trending and user-friendly comparative analysis
- Command line for scripting and drill-down at the host

The new additions provide throughput and latency statistics for:

- Datastore per host
- Storage adapter and path per host
- Datastore per virtual machine
- VMDK per virtual machine

Inventory object    Per component      Protocol support    vCenter    esxtop
Host                Datastore          FC/NFS/iSCSI        Yes        Yes
                    Storage adapter    FC                  Yes        Yes
                    Storage path       FC                  Yes        Yes
                    LUN                FC/iSCSI            Yes        Yes
Virtual machine     Datastore          FC/NFS/iSCSI        Yes        Yes
                    VMDK               FC/NFS/iSCSI        Yes        Yes

Table: New throughput and latency metrics.
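Beyond the vSphere client and esxtop, the same statistics can be pulled through the PerformanceManager API for scripting. The pyVmomi sketch below is an illustration (not part of this paper): the counter name datastore.totalReadLatency.average is an assumption based on common vSphere builds, and connection details are hypothetical.

# Hedged sketch: query real-time datastore read latency for one host.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
perf = content.perfManager

# Map "group.name.rollup" strings to counter IDs.
counters = {f'{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType}': c.key
            for c in perf.perfCounter}
counter_id = counters['datastore.totalReadLatency.average']  # assumed name

host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
query = vim.PerformanceManager.QuerySpec(
    entity=host, intervalId=20,          # 20-second real-time samples
    metricId=[vim.PerformanceManager.MetricId(counterId=counter_id,
                                              instance='*')],
    maxSample=3)
for result in perf.QueryPerf(querySpec=[query]):
    for series in result.value:
        print(series.id.instance, series.value, 'ms')
Disconnect(si)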

Summary

VMware innovations continue to ensure that VMware vSphere 4.1 pushes the envelope of performance and scalability. The numerous performance enhancements in vSphere 4.1 enable organizations to get even more out of their virtual infrastructure and further reinforce VMware's role as industry leader in virtualization. vSphere 4.1 advances performance to ensure that even the most resource-intensive applications, such as large databases and Microsoft Exchange email systems, can run on private clouds powered by vSphere.

References

VMware Storage vMotion: www.vmware.com/files/pdf/vmware-storage-vmotion-ds-en.pdf
VMware vMotion: www.vmware.com/files/pdf/vmware-vmotion-ds-en.pdf
vSphere Resource Management Guide: www.vmware.com/pdf/vsp_41_resource_mgmt.pdf
Managing Performance Variance of Applications Using Storage I/O Control: www.vmware.com/go/tp-managing-performance-variance
Configuration Maximums - VMware vSphere 4.1: www.vmware.com/pdf/vsphere4/r41/vsp_41_config_max.pdf