WHITE PAPER Optimizing Virtual Platform Disk Performance

The intensified demand for IT network efficiency and lower operating costs has driven the phenomenal growth of virtualization over the past decade, with no signs of slowing. Many corporations now run more virtualized servers than physical servers. While virtualization provides opportunities for consolidation and better hardware utilization, it is critically important to recognize, and never exceed, hardware capacities. The importance of ensuring sufficient CPU and memory is well understood, and many processes and management tools are available to help plan and properly provision VMs for these critical resources. I/O traffic, both network and disk, is more complicated to account for in virtual environments because it tends to be less predictable. To better accommodate disk I/O, most virtualization deployments employ a Storage Area Network (SAN), which can offer greater data throughput and a dynamic environment that adapts to fluctuations in I/O demand. Yet even when a storage infrastructure is built out to meet expected demands, there are uncontrollable behaviors that will still impede performance.

File Fragmentation

As files are written to a general-purpose local disk file system, such as Windows NTFS, a natural byproduct is file fragmentation: a state in which the data stream of a file is stored in non-contiguous clusters in the file system. Fragmentation occurs at the logical volume level; device drivers translate logical clusters to logical blocks, and eventually to physical sectors residing on a storage device. The result is that pieces of a file are located in a non-contiguous manner.

Figure: Fragmented files stored in non-contiguous blocks in the guest file system
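The cost of non-contiguous storage can be illustrated with a small model. The sketch below is a hypothetical extent list, not a real NTFS reader: it counts how many separate disk requests are needed to read a file stored as a list of (start cluster, length) runs.

```python
# Illustrative model only: a file is a list of (start_cluster, length) extents;
# every discontiguity between runs forces an additional disk request.

def io_requests(extents):
    """Count the read requests needed to fetch all extents in order.
    Runs that happen to be physically adjacent coalesce into one request."""
    if not extents:
        return 0
    requests = 1
    for (prev_start, prev_len), (start, _) in zip(extents, extents[1:]):
        if start != prev_start + prev_len:   # gap on disk => new request
            requests += 1
    return requests

contiguous = [(1000, 2048)]                                    # one unbroken run
fragmented = [(1000, 512), (9000, 512), (3000, 512), (7000, 512)]  # same data, 4 pieces

print(io_requests(contiguous))   # 1
print(io_requests(fragmented))   # 4
```

The same amount of data requires four times as many disk requests once the file is split into four scattered pieces, which is the overhead the sections below measure.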

The effect of this file fragmentation is increased I/O overhead, leading to slower system performance for the operating system. On virtual platforms, a guest operating system is stored as a file (or set of files) on the virtual platform's file system: a virtual disk. A virtual disk is essentially a container file, housing all the files that constitute the OS and user data of a VM. A virtual disk file can fragment just as any other file can, resulting in what amounts to a logically fragmented virtual hard disk that may still contain typical file fragmentation within it. The virtual disk pictured at right would appear as VirtualServer1.vmdk: 30GB in size, in 4 pieces.

Figure: Fragmented virtual disk file stored in non-contiguous blocks in the host file system

This situation equates to hierarchical fragmentation, or more simply fragmentation-within-fragmentation. Given the relatively static nature and large size of virtual disks, and the large allocation unit size of VMFS (typically 1MB), fragmentation of these container files is unlikely to cause performance issues in most cases. The focus of any solution to fragmentation should therefore be directed at the guest operating system. Fragmentation within a Windows VM causes Windows to generate additional, unnecessary I/O. This added I/O traffic can be observed with Windows Performance Monitor, where fragmentation is one of the principal causes of split I/O. Fragmentation prevention and defragmentation technologies exist to eliminate this unnecessary I/O overhead and improve system performance. Fragmentation prevention solves fragmentation at the source, by causing files to be written contiguously via advanced file system drivers. Defragmentation is the action in which file fragments are re-aligned within the file system into a single extent, so that only the minimal number of disk I/Os is required to access the file, thereby increasing access speed.
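The two layers can be modeled together. In the sketch below, the extent maps and KB offsets are invented for illustration (this is not how NTFS or VMFS expose their metadata): a single guest read is split wherever the guest file is fragmented, and each resulting request may split again against the virtual disk file's own extent map on the host.

```python
# Toy two-layer model of fragmentation-within-fragmentation (all numbers in KB,
# all extent maps invented for illustration).

def split_requests(request, extent_map):
    """Split one (offset, length) request into per-extent disk requests.

    extent_map lists (logical_start, physical_start, length) runs for a file.
    A request spanning two runs must be issued as two separate requests.
    """
    offset, length = request
    out = []
    for lstart, pstart, elen in extent_map:
        lo = max(offset, lstart)
        hi = min(offset + length, lstart + elen)
        if lo < hi:  # the request overlaps this extent
            out.append((pstart + (lo - lstart), hi - lo))
    return out

guest_map = [(0, 500, 64), (64, 900, 64)]   # a 128 KB guest file in two pieces
host_map = [(0, 0, 40 * 1024 * 1024)]       # one contiguous 40 GB virtual disk

guest_ios = split_requests((0, 128), guest_map)        # one 128 KB application read
host_ios = [h for g in guest_ios for h in split_requests(g, host_map)]
print(len(guest_ios), len(host_ios))   # 2 2: the split in the guest propagates down
```

Note how the multiplication happens in the guest: even though the virtual disk is contiguous on the host, the guest-level fragmentation already doubled the request count at every layer below it.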

Partition Alignment

Depending on your storage protocol and virtual disk type, misaligned partitions can cause additional unnecessary I/O. [1] In the example at right, in which the ESX and SAN volumes are not properly aligned, a Word file spanning four NTFS clusters causes additional unnecessary I/O in both VMFS and the SAN LUN.

Figure: A misaligned file crossing the NTFS, VMFS, and SAN LUN layers

Similarities Between Partition Alignment and Fragmentation

Much as misaligned partitions can cause additional I/O at multiple layers, so does fragmentation. But while partitions can be properly aligned once and never require further corrective action, fragmentation continues to occur and needs to be regularly addressed. In the example below, which assumes proper partition alignment, a file in eight fragments in the guest OS causes additional I/Os to be generated at the virtualization platform layer [2] and at the LUN.

Figure: A fragmented file crossing the NTFS (64KB cluster), VMFS (1MB block), and SAN LUN (128KB stripe) layers

Defragmenting this file in the guest operating system eliminates the excess I/O when accessing it, as Windows then generates only one I/O. This reduction in I/O traffic carries through to the host file system and SAN LUN, ensuring efficiency at each layer.

Figure: The same file after defragmentation, accessed with a single I/O at each layer

1 VMware guide to proper partition alignment: http://www.vmware.com/pdf/esx3_partition_align.pdf
2 Note that VMFS, in the example above, need only read the actual amount of data requested in multiples of 512-byte sectors; it does not need to read an entire 1MB block.
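The alignment effect reduces to simple arithmetic: a read touches an extra stripe unit whenever its start offset plus length crosses a stripe boundary. A sketch, using the 64KB cluster and 128KB stripe sizes from the examples above and a hypothetical 96KB misaligned offset:

```python
# Alignment as arithmetic. Cluster (64 KB) and stripe (128 KB) sizes come from
# the examples above; the 96 KB misaligned start offset is a hypothetical value
# chosen so the cluster straddles a stripe boundary.

def stripes_touched(offset_kb, length_kb, stripe_kb=128):
    """How many stripe units a read of length_kb starting at offset_kb spans."""
    first = offset_kb // stripe_kb
    last = (offset_kb + length_kb - 1) // stripe_kb
    return last - first + 1

print(stripes_touched(0, 64))    # 1: an aligned cluster fits inside one stripe
print(stripes_touched(96, 64))   # 2: the misaligned cluster costs a second disk access
```

This is why a one-time alignment fix (such as the 64KB offset used in the test configuration later in this paper) permanently removes that class of extra I/O, whereas fragmentation keeps re-creating boundary crossings as files are written.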

Best Practices

Defragmentation of the Windows file system is a VMware-recommended performance solution. VMware Knowledge Base article 1004004 [3] states: "Defragmenting a disk is required to address problems encountered with an operating system as a result of file system fragmentation. Fragmentation problems result in slow operating system performance." To validate this statement, the following tests were performed.

Test Environment Configuration

Host OS: ESX Server 4.1 with VMFS (1MB blocks)
Guest OS: Windows Server 2008 R2 x64 (3GB RAM, 1 vCPU)
Benchmarking software: Iometer (http://www.iometer.org/)
Fragmentation program: FragmentFile.exe (used to fragment a specified file)
Defragmentation software: V-locity (http://www.diskeeper.com/business/v-locity/)
Storage: 10GB test volume in a 40GB virtual disk, on a 410GB VMFS datastore
Controller: HP Smart Array P400, RAID 5 (4x 136GB SCSI at 10K RPM), 64KB stripe size with a 64KB offset (properly aligned)

3 http://kb.vmware.com/selfservice/microsites/search.do?language=en_us&cmd=displaykc&externalid=1004004

Load Generation

The industry-standard benchmarking tool Iometer was used to generate I/O load for these experiments. Iometer configuration options used as variables:

Transfer request sizes: 1KB, 4KB, 8KB, 16KB, 32KB, 64KB, 72KB, and 128KB
Percent random or sequential distribution: for each transfer request size, 0 percent and 100 percent random accesses were selected
Percent read or write distribution: for each transfer request size, 0 percent and 100 percent read accesses were selected

Iometer parameters held constant for all tests:

Size of volume: 10GB
Size of Iometer test file (iobw.tst): 8,131,204 KB (~7.75GB)
Number of outstanding I/O operations: 16
Runtime: 4 minutes
Ramp-up time: 60 seconds
Number of workers to spawn automatically: 1

The following is excerpted from a VMware white paper [4] and helps to explain why these Iometer parameters were chosen:

"Servers typically run a mix of workloads consisting of different access patterns and I/O data sizes. Within a workload there may be several data transfer sizes and more than one access pattern. There are a few applications in which access is either purely sequential or purely random. For example, database logs are written sequentially. Reading this data back during database recovery is done by means of a sequential read operation. Typically, online transaction processing (OLTP) database access is predominantly random in nature. The size of the data transfer depends on the application and is often a range rather than a single value. For Microsoft Exchange, the I/O size is generally small (from 4KB to 16KB), Microsoft SQL Server database random read and write accesses are 8KB, Oracle accesses are typically 8KB, and Lotus Domino uses 4KB. On the Windows platform, the I/O transfer size of an application can be determined using Perfmon. In summary, I/O characteristics of a workload are defined in terms of the ratio of read operations to write operations, the ratio of sequential accesses to random accesses, and the data transfer size. Often, a range of data transfer sizes may be specified instead of a single value."

4 http://www.vmware.com/pdf/esx3_partition_align.pdf
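The two access-pattern variables above can be made concrete with a toy generator. The sketch below is an invented illustration, not Iometer's actual code; its parameters mirror the test matrix (file size, transfer size, percent random):

```python
# Toy access-pattern generator mirroring the Iometer variables used in the tests.
# Invented for illustration; Iometer implements this natively.
import random

def offsets(file_size, xfer_size, random_pct, count, seed=0):
    """Return byte offsets: sequential steps, with random_pct% random jumps."""
    rng = random.Random(seed)
    blocks = file_size // xfer_size
    pos = 0
    out = []
    for _ in range(count):
        if rng.randrange(100) < random_pct:
            pos = rng.randrange(blocks)       # random access: jump anywhere in the file
        out.append(pos * xfer_size)
        pos = (pos + 1) % blocks              # sequential: advance by one transfer
    return out

# 100 percent sequential 8 KB reads over an ~8 GB test file:
print(offsets(8 * 2**30, 8 * 2**10, 0, 4))   # [0, 8192, 16384, 24576]
```

A fully sequential pattern marches linearly through the file (and so benefits most from contiguous placement), while a fully random pattern forces a seek per transfer regardless of layout; the results later in the paper reflect that difference.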

Create Fragmentation

The FragmentFile.exe tool was used to fragment the Iometer test file (iobw.tst) into 568,572 fragments, a mid-range amount of fragmentation for a production server. The following statistics were collected with V-locity from an analysis of the volume.

Volume statistics:
Volume size: 10,240 MB
Cluster size: 4 KB
Used space: 8,023 MB
Free space: 2,216 MB
Percent free space: 21%

Low-performing files percentage:
% of entire volume: 77%
% of used space: 98%

Most fragmented file:
File: \iobw.tst
Fragments: 568,572
File size: 7,941 MB

File fragmentation:
Total files: 11
Average file size: 724 MB
Total fragmented files: 1
Total excess fragments: 568,572
Average fragments per file: 51,689.36
Files with performance loss: 1

Free space fragmentation:
Percent low-performing free space: 0%
Total free space extents: 3
Largest free space extent: 911 MB
Average free space extent size: 739 MB

Test Procedure

The primary objective was to characterize the performance of fragmented versus defragmented virtual machines for a range of data sizes across a variety of access patterns. The data sizes selected were 1KB, 4KB, 8KB, 16KB, 32KB, 64KB, 72KB, and 128KB. The access patterns were restricted to combinations of 100 percent read or write and 100 percent random or sequential. Each of these four workloads was tested at eight data sizes, for a total of 32 data points. To isolate the impact of fragmentation, only the test VM was powered on and active for the duration of the tests. For the initial run, Iometer created a non-fragmented file and performance data was collected. The FragmentFile.exe tool was then used to fragment the Iometer test file, the VM was rebooted, and the test procedure was rerun. This produced data sets for both the non-fragmented and fragmented scenarios. The results are graphed below.
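The reported statistics are internally consistent under one plausible reading, namely that "excess fragments" counts the pieces beyond each file's first (a common defragmenter convention). The arithmetic:

```python
# Cross-check of the V-locity statistics above, assuming "excess fragments"
# counts pieces beyond each file's first piece (an assumption, not documented
# V-locity behavior).
total_files = 11
excess_fragments = 568_572

total_pieces = total_files + excess_fragments   # each of the 11 files has >= 1 piece
print(round(total_pieces / total_files, 2))     # 51689.36, the reported average
```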

Performance Results

As the graphs show, every workload exhibits an increase in throughput when the test file is defragmented (i.e., contiguous). It also becomes clear that as the I/O read/write size increases, the fragmentation-induced I/O latency increases dramatically. The greatest improvements from a contiguous file are found with file reads, both random and sequential.

Conclusion

Fragmentation demonstrably impedes the performance of Windows guest operating systems. While the tests depicted here were executed on a single VM, the issue is compounded in a multi-VM environment in which each VM suffers from file fragmentation. Because virtualized servers share underlying resources, disk I/O generated in one virtual machine affects I/O requests from other virtual systems; latency in one VM therefore inflates latency in co-located virtual machines (VMs that share a common platform). Fragmentation artificially inflates the number of disk I/O requests, which, on a virtual platform, compounds the disk bottleneck even more than on conventional systems. Eliminating fragmentation in VMs, and the corresponding unnecessary disk I/O traffic, is vital to platform-wide performance and enhances the ability to host more VMs on a shared infrastructure.

2011 Diskeeper Corporation. All Rights Reserved. Diskeeper, the Diskeeper Logo and V-locity are registered trademarks of Diskeeper Corporation. All other trademarks are the property of their respective owners.