NEXGEN N5 PERFORMANCE IN A VIRTUALIZED ENVIRONMENT
White Paper, January 2015
Contents
Introduction ... 2
Objective ... 2
Audience ... 2
NexGen N5 ... 2
Test Environment ... 2
Solution Components ... 2
Workloads ... 3
Test Architecture ... 4
Test Results ... 6
Conclusion ... 9
Appendix A: NexGen N5 Series Line-up ... 11
Appendix B: Workload Details ... 12
Introduction
As IT virtualization becomes increasingly prevalent, the storage component of the virtualization architecture continues to play a critical role in performance and availability. The ability to benchmark and predict storage performance in a virtualized environment is therefore critical.

Objective
The objective of this white paper is to provide data that will help in selecting and sizing the proper NexGen storage array for your VMware deployment. Additionally, it demonstrates the value of using NexGen storage Quality of Service (QoS) to maximize VM density and increase ROI.

Audience
The intended audience of this performance paper is IT planners and decision makers within medium-sized businesses and small to medium enterprises. Storage and solution architects at resellers will find this information beneficial as well.

NexGen N5
The NexGen N5 Hybrid Flash Array makes performance affordable by combining memory-attached flash performance with disk capacity. Unlike other hybrids, NexGen's QoS software allows customers to provision, prioritize, and control application performance based on their business objectives. The result is the ideal balance of high-performance memory-attached flash and affordable disk capacity.

Test Environment

Solution Components
The components within the test infrastructure include:

Cisco
- UCS Blade Chassis model: 5108
  - 2x Fabric Extender IO Module (FEX): 2208XP
- 2x Nexus Switches model: 5548UP
- 2x Fabric Interconnects model: 6248UP
- Blade Server model: B200-M3
  - Blade VIC: 1240 LOM
  - Blade Mezzanine Card: empty
  - Blade Memory: 256 GB
  - Blade Processors: 2x Xeon E5-2650 (2 GHz, 16 cores total)
NexGen
- NexGen N5-500 Hybrid Flash Array

Virtualization Platform
- VMware vSphere 5.5 with ESXi 5.1 (build 1065491) hosts
- vCenter 5.5 management platform

Workloads
At NexGen we performed independent testing using IOmark-VM, a storage-specific workload generator developed by Evaluator Group that uses a mix of real-world and application-centric workloads to test storage system performance. The data obtained and the results posted within this paper are fully owned by NexGen and have not been audited by Evaluator Group.

The criteria and performance requirements for all application workloads are:
- Workloads are scaled in sets of 8 sub-workloads
- 70% of response times for I/Os must be less than 30 ms
- All storage must reside on the storage system under test
- Each combination of 21 workloads must run 1 instance of the following sub-workloads:
  - Clone, deploy, boot, software upgrade, VM deletion
  - Storage vMotion between storage volumes

Figure 1: IO Profile
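The 70%-under-30 ms acceptance criterion above is straightforward to verify programmatically; a minimal sketch in Python, using hypothetical latency samples rather than measurements from this test:

```python
# Sketch: checking the IOmark-VM latency criterion that 70% of I/O
# response times must fall below 30 ms. The sample latencies are
# illustrative, not data from this paper's tests.
def meets_latency_criterion(latencies_ms, threshold_ms=30.0, required_fraction=0.70):
    """Return True if at least required_fraction of samples are under threshold_ms."""
    under = sum(1 for latency in latencies_ms if latency < threshold_ms)
    return under / len(latencies_ms) >= required_fraction

sample = [5, 8, 12, 20, 25, 28, 35, 40, 15, 10]  # 8 of 10 samples under 30 ms
print(meets_latency_criterion(sample))  # True: 80% of I/Os are under 30 ms
```

In a real run the samples would come from the per-workload response-time logs aggregated over the 30-minute test cycle.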
The DVD Store workload consists of a single database server along with three web clients, each running on a different virtual machine and using pre-defined workload and data sets. For more details on the DVD Store database application see: http://linux.dell.com/dvdstore/

The Exchange workload models a Microsoft messaging and email server. Only the server portion of Exchange is recreated in this workload set, with the clients indirectly contributing to I/O via requests to the messaging server.

The Olio application consists of both a database server and a web client running on different virtual machines with a pre-loaded data set. For more details on Olio see: http://incubator.apache.org/olio/

There are two hypervisor workloads that are based on common operations performed in virtual infrastructure environments; they require the availability of a VMware vCenter server to perform the operations.

Test Architecture
Testing utilized a Cisco UCS/Nexus topology as shown in Figure 2:

Figure 2: Solution Components
We installed and configured VMware vSphere 5.5 on two servers; see Table 1 for server details. 5.7 TB of storage was allocated to each ESXi host in order to run 21 workloads per host.

Table 1: Cisco UCS Blade Detail

The objective of the first test scenario was to assign the 40 LUNs (11.4 TB) to appropriate QoS policies based on the workload profiles detailed in Figure 1. By assigning performance minimums to LUNs and guaranteeing application SLAs, we can optimize the performance of the array; Table 2 and Figure 4 detail how the storage was allocated. The test cycle consists of a 30-minute window in which performance data is collected for all executing workloads, with the resulting data aggregated into overall response times, VM density, and cost per VM.

For our second test scenario, we wanted to emulate a typical storage array without QoS, which we accomplished by placing all 40 LUNs (11.4 TB) into a single Mission Critical policy. Again, a 30-minute test cycle was executed, and the data was collected and compared to the previous test.

Table 2: Allocation of LUNs into appropriate QoS Policies
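The first scenario's methodology, grouping LUNs into QoS policies according to their workload profile, can be sketched as a simple mapping. The policy names other than "Mission Critical" (which the paper uses) and the workload-to-policy mapping below are illustrative assumptions, not the actual assignments from Table 2:

```python
# Sketch: grouping LUNs into QoS policies by workload type, mirroring
# the first test scenario. The mapping is a hypothetical example, not
# the paper's actual Table 2 assignments.
POLICY_FOR_WORKLOAD = {
    "exchange": "Mission Critical",       # latency-sensitive mail server
    "dvd_store_db": "Mission Critical",   # OLTP database
    "olio_db": "Business Critical",       # web-app database
    "olio_web": "Non-Critical",           # web front end
    "dvd_store_web": "Non-Critical",      # web front end
}

def assign_luns(luns):
    """Group LUN names by the QoS policy of their workload type."""
    policies = {}
    for name, workload in luns:
        policies.setdefault(POLICY_FOR_WORKLOAD[workload], []).append(name)
    return policies

luns = [("lun01", "exchange"), ("lun02", "olio_web"), ("lun03", "dvd_store_db")]
print(assign_luns(luns))
# {'Mission Critical': ['lun01', 'lun03'], 'Non-Critical': ['lun02']}
```

The second scenario corresponds to collapsing this mapping so every workload type points at the single Mission Critical policy.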
Figure 4: Quality of Service LUN assignments

Test Results
The first test scenario, which emulates assigning QoS policies to LUNs based on business/application SLAs, yielded impressive results: executing 37 concurrent workloads delivered a density of 296 virtual machines at a cost of $378.38 per VM, as detailed in Table 3. Breaking down the response times of each workload, Table 4 shows all of the average response times well below the 30 ms benchmark threshold, while Figure 5 summarizes the cumulative response times, of which 90% fall below the 30 ms benchmark threshold. At no time during the testing did we observe memory or CPU resource contention on the two B200-M3 blades, which is why additional blades were not used in this test. A key takeaway is that we could likely achieve even better results by continuing to optimize the QoS LUN assignments. The benefit of the NexGen N5 integration with the VMware vStorage APIs for Array Integration (VAAI) was readily apparent, as the vCenter operations (Clone & Deploy, Storage vMotion) were very storage intensive.

Table 3: Results, LUNs assigned to appropriate QoS Policies

Table 4: Detailed Workload Results, LUNs assigned to appropriate QoS Policies
Figure 5: Cumulative Response Time Results, LUNs assigned to appropriate QoS Policies

Next, we tested against the same N5-500, this time assigning all the LUNs to a single Mission Critical QoS policy. In this scenario, all 11.4 TB of allocated storage competed for the 5.2 TB of flash within the array, simulating an array without QoS. Initial testing showed the expected degradation in the response times of the individual workloads. For this test, therefore, we looked only for a cumulative workload count at which 70% of the cumulative response times fell below the 30 ms benchmark threshold. By executing 30 concurrent workloads we achieved a density of 240 virtual machines at a cost of $466.67 per VM, as detailed in Table 5. Figure 6 summarizes the overall response times, of which 71% fall below the 30 ms threshold. What is important to note here is the value of QoS, namely performance predictability and increased application density, as witnessed by the response-time delta between the two test scenarios.
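The density and cost figures in the two scenarios are consistent with each IOmark-VM workload set contributing 8 VMs (37 × 8 = 296, 30 × 8 = 240) and a fixed array cost spread over the VM count. A short sketch reproduces the arithmetic; note that the array cost (~$112,000) is back-calculated here from the published cost-per-VM figures, not a price stated in the paper:

```python
# Sketch: reproducing the paper's VM-density and cost-per-VM arithmetic.
# ARRAY_COST is an assumption back-calculated from the published numbers
# (296 x $378.38 and 240 x $466.67 both imply roughly $112,000).
VMS_PER_WORKLOAD_SET = 8       # each IOmark-VM workload set is 8 VMs
ARRAY_COST = 112_000           # assumed implied array cost in dollars

def density_and_cost(workload_sets):
    """Return (VM density, cost per VM) for a given count of workload sets."""
    vms = workload_sets * VMS_PER_WORKLOAD_SET
    return vms, round(ARRAY_COST / vms, 2)

print(density_and_cost(37))  # QoS-policy scenario: (296, 378.38)
print(density_and_cost(30))  # single-policy scenario: (240, 466.67)
```

The same fixed cost spread over fewer VMs is what drives the roughly 23% higher cost per VM in the no-QoS scenario.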
Table 5: Results, All LUNs in Single QoS Policy

Figure 6: Cumulative Response Time Results, All LUNs in Single QoS Policy

Conclusion
The NexGen N5-500 demonstrated outstanding performance at a very low cost point during our independent testing. When we analyzed the workload requirements and assigned QoS policies to the LUNs appropriately, the result was a density of 296 virtual machines at a cost of $378.38 per VM. Extrapolating the results to real-world deployments, in which clustering and HA designs leave a number of VMs idle, and in which normal iterative workload analysis drives subsequent QoS tuning, the NexGen N5 would yield even greater VM density and a much lower cost per VM. As a point of reference, when we removed the QoS logic, we saw performance and VM density degrade, which further corroborates the significance of NexGen storage QoS. The ability to dynamically manage performance via QoS allows customers to control what data is stored in flash and tailor application performance to match business priorities. The testing delivers quantified evidence that QoS is a core enabler of value-driven data management.
Appendix A: NexGen N5 Specifications
Appendix B: Workload Details