NEXGEN N5 PERFORMANCE IN A VIRTUALIZED ENVIRONMENT

White Paper: NexGen N5 Performance in a Virtualized Environment, January 2015

Contents

Introduction
Objective
Audience
NexGen N5
Test Environment
  Solution Components
  Workloads
Test Architecture
Test Results
Conclusion
Appendix A: NexGen N5 Series Line-up
Appendix B: Workload Details

Introduction

As IT virtualization becomes increasingly prevalent, the storage component of the virtualization architecture continues to play a critical role in performance and availability. The ability to benchmark and predict storage performance in a virtualized environment is therefore critical.

Objective

The objective of this white paper is to provide data that will help in selecting and sizing the proper NexGen storage array for your VMware deployment. It also demonstrates the value of using NexGen storage Quality of Service (QoS) to maximize VM density and increase ROI.

Audience

The intended audience of this performance paper is IT planners and decision makers within medium-sized businesses and small to medium enterprises. Storage and solution architects at resellers will also find this information beneficial.

NexGen N5

The NexGen N5 Hybrid Flash Array makes performance affordable by combining memory-attached flash performance with disk capacity. Unlike other hybrids, NexGen's QoS software allows customers to provision, prioritize, and control application performance based on their business objectives. The result is the ideal balance of high-performance memory-attached flash and affordable disk capacity.

Test Environment

Solution Components

The components within the test infrastructure include:

Cisco
  o UCS Blade Chassis, model 5108
    - 2x Fabric Extender IO Modules (FEX): 2208XP
  o 2x Nexus switches, model 5548UP
  o 2x Fabric Interconnects, model 6248UP
  o Blade server, model B200-M3
    - Blade VIC: 1240 LOM
    - Blade mezzanine card: empty
    - Blade memory: 256 GB
    - Blade processors: 2x Xeon E5-2650 (2 GHz, 16 cores total)

NexGen
  o NexGen N5-500 Hybrid Flash Array

Virtualization Platform
  o VMware vSphere 5.5 with ESXi 5.1 (build 1065491) hosts
  o vCenter 5.5 management platform

Workloads

Here at NexGen we performed independent testing utilizing IOmark-VM, a storage-specific workload generator developed by the Evaluator Group that utilizes a mix of real-world, application-centric workloads to test storage system performance. The data obtained and the results posted within this paper are fully owned by NexGen and have not been audited by the Evaluator Group.

The criteria and performance requirements are:

For all application workloads:
  o Workloads are scaled in sets of 8 sub-workloads
  o 70% of I/O response times must be less than 30 ms
  o All storage must reside on the storage system under test

Each combination of 21 workloads must run 1 instance of the following sub-workloads:
  o Clone, deploy, boot, software upgrade, and VM deletion
  o Storage vMotion between storage volumes

Figure 1: IO Profile
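The pass/fail criterion above (70% of I/O response times under 30 ms) can be expressed as a simple check. The sketch below is illustrative only; it is not the IOmark-VM tool's actual implementation:

```python
def meets_iomark_criterion(response_times_ms, threshold_ms=30.0, required_fraction=0.70):
    """Return True if at least `required_fraction` of the sampled I/O
    response times fall below `threshold_ms` (the IOmark-VM pass rule)."""
    if not response_times_ms:
        return False
    under = sum(1 for t in response_times_ms if t < threshold_ms)
    return under / len(response_times_ms) >= required_fraction

# Example: 8 of 10 samples (80%) are under 30 ms, so the run passes.
samples = [5, 12, 8, 25, 29, 31, 18, 45, 10, 22]
print(meets_iomark_criterion(samples))  # True
```

A run where most latencies cluster above 30 ms would fail the same check, which is the situation the second test scenario below probes for.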

The DVD Store workload consists of a single database server and three web clients, each running in a separate virtual machine with pre-defined workload and data sets. For more details on the DVD Store database application, see http://linux.dell.com/dvdstore/

The Exchange workload represents a Microsoft messaging and email server. Only the server portion of Exchange is recreated in this workload set, with the clients indirectly contributing to I/O via requests to the messaging server.

The Olio application consists of a database server and a web client running in different virtual machines with a pre-loaded data set. For more details on Olio, see http://incubator.apache.org/olio/

There are two hypervisor workloads based on common operations performed in virtual infrastructure environments; they require the availability of a VMware vCenter server to perform the operations.

Test Architecture

Testing utilized a Cisco UCS/Nexus topology, as shown in Figure 2:

Figure 2: Solution Components

We installed and configured VMware vSphere 5.5 on two servers; see Table 1 for server details. 5.7 TB of storage was allocated to each ESXi host in order to accommodate 21 workloads per host.

Table 1: Cisco UCS Blade Detail

The objective of the first test scenario was to logically allocate the 40 LUNs (11.4 TB) to their appropriate QoS policies, based on the workload profiles detailed in Figure 1. By assigning performance minimums to LUNs and guaranteeing application SLAs, we can optimize the performance of the array; Table 2 and Figure 4 detail how the storage was allocated. The test cycle consists of a 30-minute window in which performance data is collected for all of the workloads being executed, with the resulting data aggregated into overall response times, VM density, and cost per VM.

For our second test scenario, we wanted to emulate a typical storage array without QoS, which we accomplished by placing all 40 LUNs (11.4 TB) into a single Mission Critical policy. Again, a 30-minute test cycle was executed, and the data was collected and compared to our previous test.

Table 2: Allocation of LUNs into appropriate QoS Policies
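The storage allocation can be modeled as a mapping of LUNs to QoS policies with performance minimums. Since Table 2 is not reproduced in this transcription, the policy names other than Mission Critical, the per-policy LUN counts, and the IOPS minimums below are illustrative assumptions, not the tested configuration; only the totals (40 LUNs, 11.4 TB) come from the paper:

```python
# Hypothetical QoS layout: 40 LUNs totaling 11.4 TB across two ESXi
# hosts (5.7 TB each). Uniform LUN sizes are assumed for simplicity.
LUN_SIZE_TB = 11.4 / 40  # 0.285 TB per LUN

qos_policies = {
    # policy name: (number of LUNs, assumed per-LUN IOPS minimum)
    "Mission Critical": (10, 7500),   # name from the paper; counts assumed
    "Business Critical": (14, 3000),  # illustrative policy
    "Non-Critical": (16, 1500),       # illustrative policy
}

total_luns = sum(count for count, _ in qos_policies.values())
total_tb = total_luns * LUN_SIZE_TB
print(f"{total_luns} LUNs, {total_tb:.1f} TB allocated")  # 40 LUNs, 11.4 TB allocated
```

Whatever the real split, the point of the first scenario is that each LUN's minimum is sized to its workload profile from Figure 1 rather than all LUNs competing equally for flash.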

Figure 4: Quality of Service LUN assignments

Test Results

The first test scenario, which emulates assigning QoS policies to LUNs based on business/application SLAs, yielded impressive results: executing 37 concurrent workloads delivered a density of 296 virtual machines at a cost of $378.38 per VM, as detailed in Table 3. Breaking down the response times of each workload, Table 4 shows all of the average response times well below the 30 ms benchmark threshold, while Figure 5 summarizes the cumulative response times (of which 90% fall below the 30 ms benchmark threshold). At no time during the testing did we note memory or CPU resource contention on the two B200-M3 blades, which is why no additional blades were used during this test. A key takeaway is that we could see even better results by continuing to optimize the QoS LUN assignments. The benefit of the NexGen N5 integration with the VMware vStorage APIs for Array Integration (VAAI) was readily apparent, as the vCenter operations (clone and deploy, Storage vMotion) were very storage intensive.

Table 3: Results - LUNs assigned to appropriate QoS Policies

Table 4: Detailed Workload Results - LUNs assigned to appropriate QoS Policies

Figure 5: Cumulative Response Time Results - LUNs assigned to appropriate QoS Policies

Next, we tested against the same N5-500, this time assigning all of the LUNs to a single Mission Critical QoS policy. In this scenario, all 11.4 TB of allocated storage competed for the 5.2 TB of flash within the array, simulating an array without QoS. Initial testing showed the expected degradation in the response times of the individual workloads. As such, for this test we looked only for the cumulative workload count at which 70% of the cumulative response times fell below the 30 ms benchmark threshold. By executing 30 concurrent workloads, we achieved a density of 240 virtual machines at a cost of $466.67 per VM, as detailed in Table 5. Figure 6 summarizes the overall response times (of which 71% fall below the 30 ms threshold). What is important to note here is the value of QoS (performance predictability and increased application density), as witnessed by the response-time delta between the two test scenarios.
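The cost-per-VM figures in the two scenarios are consistent with a single fixed solution price divided by the achieved VM density. The roughly $112,000 total below is back-calculated from the published numbers, not a quoted list price:

```python
# Back-calculate the implied solution cost from the two test scenarios.
# Cost per VM = total solution price / VM density achieved.
scenarios = {
    "QoS policies assigned (37 workloads)": (296, 378.38),  # density, $/VM
    "Single Mission Critical policy (30 workloads)": (240, 466.67),
}

implied = {name: density * cost_per_vm
           for name, (density, cost_per_vm) in scenarios.items()}

for name, total in implied.items():
    print(f"{name}: implied solution cost ${total:,.2f}")
# Both scenarios imply the same ~$112,000 price, so the higher VM
# density achieved with QoS translates directly into a lower cost per VM.
```

This is why VM density is the lever that matters in these results: the hardware cost is constant across both runs, and QoS raised the number of VMs that cost is spread over.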

Table 5: Results - All LUNs in a Single QoS Policy

Figure 6: Cumulative Response Time Results - All LUNs in a Single QoS Policy

Conclusion

The NexGen N5-500 demonstrated outstanding performance at a very low cost point during our independent testing. When we analyzed the workload requirements and assigned QoS policies to the LUNs appropriately, the result was a density of 296 virtual machines at a cost of $378.38 per VM. Extrapolating the results to real-world deployments, in which clustering and HA designs would equate to a number of idle VMs, in addition to normal iterative workload analysis and subsequent QoS tuning, the NexGen N5 would yield even greater VM density and a much lower cost per VM. As a point of reference, when we removed the QoS logic, we saw performance and VM density degrade, which further corroborated the significance of NexGen storage QoS. The ability to dynamically manage performance via QoS allows customers to control what data is stored in flash and tailor application performance to match business priorities. Clearly, the testing delivers quantified evidence that QoS is a core enabler of value-driven data management.

Appendix A: NexGen N5 Specifications

Appendix B: Workload Details