Active Flash Performance for Hitachi Virtual Storage Platform Gx00 Models

By Hitachi Data Systems

March 2016

Contents

Executive Summary
Notices and Disclaimer
Purpose of Testing
Active Flash Architecture Summary
    Prompt Promotion
    High-Priority Demotion
Virtual Storage Platform Gx00 Systems Configuration and Implementation
Test Methodology
Test Results Summary
    Test Case One: Skewed 80% Random Read, 20% Random Write Workload With Moving Random Read Hot Band
    Test Case Two: Random, Skewed, 80/20 Read/Write Workload With Random Read Hot Band Moving Across DPVOLs at 10-Minute Intervals
    Test Case Three: Random Read Hot Band Moving Across DPVOLs at 10-Minute Intervals With Tier 1 Full
    Test Case Four: Random Write Hot Band Moving Across DPVOLs at 10-Minute Intervals With Tier 1 Full
    Test Case Five: Skewed, 80% Random Read, 20% Random Write Workload, Followed by Reads From Tier 2 or Tier 3
Rules of Thumb for Best Performance
Conclusions
Appendix A: Vdbench Parameter Files
    Test Case One: Skewed 80% Random Read, 20% Random Write Workload With Moving Random Read Hot Band
    Test Case Two: Random, Skewed, 80/20 Read/Write Workload With Random Read Hot Band Moving Across DPVOLs at 10-Minute Intervals
    Test Case Three: Random Read Hot Band Moving Across DPVOLs at 10-Minute Intervals With Tier 1 Full
    Test Case Four: Random Write Hot Band Moving Across DPVOLs at 10-Minute Intervals With Tier 1 Full
    Test Case Five: Skewed, 80% Random Read, 20% Random Write Workload, Followed by Reads From Tier 2 or Tier 3

Executive Summary

Purpose of Testing

The purpose of this testing was to determine whether the active flash feature of Hitachi Dynamic Tiering (HDT) can improve the performance of a Hitachi Virtual Storage Platform (VSP) Gx00 system workload in which the I/O frequency distribution changes rapidly. Without the active flash feature, HDT can't immediately promote pages that suddenly become busy. At a minimum, HDT must wait until the end of the current monitoring cycle to act on recent workload changes. The active flash feature was designed to allow HDT to improve performance more quickly by reacting to workload changes in seconds rather than minutes or hours. This testing was designed to determine whether active flash achieves that objective. A secondary objective of this testing was to measure the processor cycles consumed by the active flash feature, compared to HDT without active flash.

Noteworthy Observations

There are a few noteworthy observations regarding this testing:

- Active flash worked as expected in these tests. Workloads with frequent or abrupt changes in the address range of frequently accessed pages performed better with active flash enabled. Prompt promotion and high-priority demotion were both confirmed in the HDT relocation log when active flash was enabled. It was also observed that active flash began to work immediately in response to workload changes, without waiting for the end of the current HDT monitoring cycle.
- Active flash is designed to promote suddenly busy pages promptly and, when necessary, to demote inactive pages quickly. Therefore, the total number of pages relocated may increase substantially when active flash is enabled. Increased relocation may become counterproductive if pages with a long-term history of frequent access are prematurely demoted. It's possible that workloads with repeating patterns and/or gradual shifts in the I/O frequency distribution will get better long-term results with active flash disabled.
- On VSP Gx00 systems, in a test designed to drive high relocation rates, the additional CPU cycles consumed by active flash were substantial. Processor busy rates across 16 cores were up to 31 points higher with active flash enabled than with active flash disabled. On the other hand, average IOPS on the same test were up to 120% higher with active flash enabled than with HDT alone, so the extra processor cycles consumed by active flash were used productively. Monitor VSP Gx00 processor busy rates to be sure that the recommended thresholds (40-45% busy) are not exceeded too frequently.
- Both prompt promotion and normal HDT promotion work more slowly, or not at all, when cache write pending levels reach 70%.
- Active flash has not significantly changed HDT's ability to relieve a bottleneck on a lower tier, for the following reasons:
  - With or without active flash, HDT won't promote pages from lower tiers simply because the lower tier is overloaded. A lower tier can become overloaded yet still have few or no pages that meet the criteria for promotion.
  - Overloading a lower tier with random writes is likely to lead to a persistent overload condition, since HDT doesn't count parity operations. This is particularly true when the lower tier is configured in RAID-6.
  - Persistent tier overload indicates that some kind of reconfiguration is needed. Possibilities include tier expansion, a change in RAID type (for example, RAID-1+0 for random writes), implementation of a tiering policy, and so on.

Notices and Disclaimer

Copyright 2016 Hitachi Data Systems Corporation. All rights reserved.

The performance data contained herein was obtained in a controlled, isolated environment. Actual results obtained in other operating environments may vary significantly. While Hitachi Data Systems Corporation has reviewed each item for accuracy in a specific situation, there is no guarantee that the same results can be obtained elsewhere.

All designs, specifications, statements, information and recommendations (collectively, "designs") in this manual are presented "AS IS," with all faults. Hitachi Data Systems Corporation and its suppliers disclaim all warranties, including without limitation the warranty of merchantability, fitness for a particular purpose and non-infringement, or arising from a course of dealing, usage or trade practice. In no event shall Hitachi Data Systems Corporation or its suppliers be liable for any indirect, special, consequential or incidental damages, including without limitation lost profit or loss of or damage to data arising out of the use or inability to use the designs, even if Hitachi Data Systems Corporation or its suppliers have been advised of the possibility of such damages.

Other company, product or service names may be trademarks or service marks of others.

This document has been reviewed for accuracy as of the date of initial publication. Hitachi Data Systems Corporation may make improvements and/or changes in products and/or programs at any time without notice. No part of this document may be reproduced or transmitted without written approval from Hitachi Data Systems Corporation.

Document Revision Level

Revision   Date         Description
1.0        March 2016   Initial Release

Contributors

The information included in this document represents the expertise, feedback and suggestions of these skilled practitioners:

- Charles Lofton, Master Performance Consultant, HDS Solutions Engineering and Technical Operations
- Anahad Dhillon, Global Virtualization Product Manager, HDS Product Management

Purpose of Testing

The purpose of this testing was to determine whether the active flash feature of Hitachi Dynamic Tiering (HDT) for Hitachi Virtual Storage Platform (VSP) Gx00 systems can improve the performance of a workload in which the I/O frequency distribution changes rapidly. Without the active flash feature, HDT can't immediately promote pages that suddenly become busy. At a minimum, HDT must wait until the end of the current monitoring cycle to act on recent workload changes. The active flash feature was designed to allow HDT to improve performance more quickly by reacting to workload changes in seconds rather than minutes or hours. This testing was designed to determine whether active flash achieves that objective. A secondary objective of this testing was to measure the processor cycles consumed by the active flash feature, compared to HDT without active flash.

The detailed performance results achieved during the active flash tests are presented in the tables and charts in the Test Results Summary section of this report. While we attempt to profile a variety of application characteristics, no benchmark can replicate a real-world application as well as the actual application itself.

Active Flash Architecture Summary

Active flash is an extension of Hitachi Dynamic Tiering, which in turn is built upon Hitachi Dynamic Provisioning. For details on these products, see the Hitachi Virtual Storage Platform G1000 Provisioning Guide for Open Systems.

The active flash feature of Hitachi Dynamic Tiering monitors page access frequency in real time and immediately promotes pages that suddenly become busy from slower media to high-performance flash media. The active flash feature can be enabled on any HDT pool as long as Tier 1 of the Dynamic Tiering pool is composed of solid-state drives (SSDs) or flash module drives (FMDs). No special configuration beyond what is needed for HDT is required.

Prompt Promotion

A primary goal of HDT and active flash is to keep the most frequently accessed pages in Tier 1. As the workload varies in both the frequency and the type of access, the threshold for moving pages from one tier to another changes. When active flash is enabled, HDT generates a dynamic tier range value in addition to its long-term tier range values. The new, dynamic tier range value, called the prompt promotion threshold, is used to determine which pages should be promoted to Tier 1 as soon as possible, without waiting for the normal HDT relocation cycle. The active flash feature compares the recent access frequency of each page to the prompt promotion threshold to determine whether a page should be moved to flash media. The prompt promotion threshold adjusts dynamically based on changes in the workload to make the most efficient use of the SSDs or FMDs. If the recent access frequency for a page meets or exceeds the prompt promotion threshold, the page is relocated to Tier 1 without waiting for the next HDT relocation cycle.

The minimum prompt promotion threshold is defined as the Tier 1 lower range x M (default M = 5). For example, if the long-term HDT lower range for Tier 1 is 10, then the minimum prompt promotion threshold for Tier 1 would be 10 x 5 = 50. As prompt promotion moves pages to Tier 1, the prompt promotion threshold is raised dynamically in proportion to the number of pages promoted by active flash.
Therefore, when many pages have been promptly promoted recently, the prompt promotion threshold is raised to better balance Tier 1 usage between short-term and long-term busy pages.

In addition to the HDT counter used to track the long-term access frequency of each page, active flash needs a new counter to track recent access frequency. This new counter is defined as 64 / (current time - count start time), in seconds. For example, if a page receives 64 I/Os in 500 milliseconds, its recent access frequency is 64 / 0.5 = 128. As soon as a page receives 64 I/Os, its recent access counter is updated and compared to the prompt promotion threshold to determine whether the page should be immediately promoted to Tier 1. After the recent access counter is checked, it is reset, regardless of whether the page is promoted.
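The promotion decision described above can be summarized in pseudocode. The following Python sketch is illustrative only; it is not Hitachi microcode, and the class and function names, as well as the simple linear rule used to raise the threshold, are assumptions made for readability.

import time

M = 5  # default multiplier for the minimum prompt promotion threshold

class PageStats:
    """Per-page recent-access bookkeeping for the prompt promotion check."""
    def __init__(self):
        self.recent_io_count = 0
        self.count_start_time = time.time()

def prompt_promotion_threshold(tier1_lower_range, pages_recently_promoted, raise_per_page=0.01):
    """Minimum threshold is the Tier 1 lower range x M; it is raised in proportion
    to the number of pages active flash has promoted recently. The exact
    proportionality used by the array is not published, so raise_per_page is
    an illustrative assumption."""
    minimum = tier1_lower_range * M
    return minimum * (1 + raise_per_page * pages_recently_promoted)

def on_page_io(page, tier1_lower_range, pages_recently_promoted):
    """Called on each I/O to a page; returns True if the page qualifies for
    prompt promotion to Tier 1."""
    page.recent_io_count += 1
    if page.recent_io_count < 64:
        return False
    elapsed = max(time.time() - page.count_start_time, 1e-6)
    recent_access_frequency = 64 / elapsed  # I/Os per second over the last 64 I/Os
    # Reset the counter whether or not the page is promoted.
    page.recent_io_count = 0
    page.count_start_time = time.time()
    return recent_access_frequency >= prompt_promotion_threshold(
        tier1_lower_range, pages_recently_promoted)

With a Tier 1 lower range of 10, M = 5 and no recent prompt promotions, a page that receives 64 I/Os in 0.5 seconds has a recent access frequency of 128, exceeds the minimum threshold of 50, and would be promptly promoted.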

Certain types of I/O (random reads in particular) benefit more from being served by flash media than others. While writes and sequential reads get a large performance benefit from VSP Gx00 cache, cache-miss random reads require the host to wait for the back-end disk to respond. Therefore, random reads get the most benefit from placement on fast disk. To achieve the best performance gains for random reads, HDT with active flash gives random read I/O greater weight than write I/O when calculating the total access frequency for a page.

Because active flash will usually increase the amount of I/O received by Tier 1, active flash also considers Tier 1 media wear levels in weighting read versus write I/O. As wear levels on Tier 1 drives increase, active flash further reduces the weight given to write I/O when calculating page access frequency. This should help to prolong the life of Tier 1 media without a significant performance penalty, since random reads benefit most from placement on fast drives.

Finally, to help prevent thrashing (excessive relocation), pages promoted via prompt promotion are not subject to normal HDT demotion for the remainder of the current HDT monitoring cycle. However, such pages could be subject to high-priority demotion.

High-Priority Demotion

To be certain that there is always some room for active flash to do prompt promotion of pages to Tier 1, high-priority demotion is used to demote low-activity pages out of Tier 1. Tier 1 pages that have the lowest long-term and recent I/O activity are candidates for high-priority demotion. Like prompt promotion, high-priority demotion does not wait for the end of the current HDT cycle to make relocation decisions.

High-priority demotion is triggered when Tier 1 free capacity is depleted, or when Tier 1 performance utilization reaches 80%. Tier 1 free capacity is defined as the space available in Tier 1 minus the reserved areas set aside for relocation and new page allocation. If there is no space available in Tier 1 aside from the reserved areas, then high-priority demotion may be triggered.

High-priority demotion will also be triggered if Tier 1 performance utilization reaches 80%. Performance utilization of a tier is defined as the amount of I/O received by the tier divided by the maximum amount of I/O it can sustain. HDT estimates the maximum amount of I/O a tier can sustain based on drive type and RAID type. A performance utilization of 100% means that the tier is receiving the maximum amount of I/O it can sustain. When performance utilization reaches about the 60% level, drive response time may become noticeably slower. Flash media are an exception to this general rule: flash drives can deliver fast response times even when 100% busy. However, to give the best possible performance characteristics, HDT with active flash triggers high-priority demotion when Tier 1 performance utilization reaches 80%.

When active flash is enabled, Tier 1 performance utilization is calculated more frequently than it is with standard HDT. The standard HDT method bases performance utilization on the amount of I/O received by the tier in the most recent monitoring cycle. With active flash enabled, Tier 1 performance utilization is based on much more frequent sampling (for example, I/O received in the last 60 seconds divided by the maximum sustainable I/O in 60 seconds). Note: the details of the method used to calculate real-time performance utilization for active flash were not available when this paper was written.
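A minimal sketch of the trigger conditions described above follows. It assumes a 60-second sampling window and illustrative variable names; as noted, the actual sampling method used by active flash was not published when this paper was written.

def performance_utilization(io_received, max_sustainable_io):
    """Utilization of a tier over a sampling window: I/O received divided by
    the maximum I/O the tier is estimated to sustain in that window."""
    return io_received / max_sustainable_io

def needs_high_priority_demotion(tier1_free_pages, reserved_pages,
                                 io_last_60s, max_io_60s):
    """High-priority demotion is triggered when Tier 1 has no free space
    beyond its reserved areas, or when Tier 1 performance utilization
    reaches 80%. Names and the 60-second window are illustrative."""
    capacity_depleted = (tier1_free_pages - reserved_pages) <= 0
    utilization = performance_utilization(io_last_60s, max_io_60s)
    return capacity_depleted or utilization >= 0.80

For example, if Tier 1 is estimated to sustain 120,000 I/Os in a 60-second window (an illustrative figure) and receives 100,000 I/Os in the most recent window, its performance utilization is roughly 83% and high-priority demotion would be triggered.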

Virtual Storage Platform Gx00 Systems Configuration and Implementation

A single Hitachi Virtual Storage Platform G600 storage system was used for this testing. It was configured with 256GB of cache, with all tests performed in a 64GB cache logical partition (CLPR). LDEV ownership was distributed across four microprocessor units (MPUs). Sixteen 16Gb Fibre Channel host ports distributed across four front-end director pairs were used. Host connections were established via an 8Gb Brocade fabric. Performance monitor data collection was enabled, and all license keys were installed.

The VSP G600 HDT pool was configured with relocation pace five (fastest) and a one-hour monitoring cycle. The default tiering policy, All, was used, and the new page allocation tier for each DPVOL was set to Tier 2. Except where specifically noted, the default buffer sizes for new page allocation and relocation were used.

Tier 1 was composed of two RAID-6 (6D+2P) parity groups consisting of 1.6TB FMDs. A single 2,447GB pool volume was allocated on each Tier 1 parity group (PG). Tier 2 was composed of 12 RAID-6 (6D+2P) parity groups consisting of 300GB 15K RPM serial-attached SCSI (SAS) hard disk drives (HDDs). A single 1,610GB pool volume was allocated on each Tier 2 parity group. Tier 3 was composed of eight RAID-6 (6D+2P) parity groups consisting of 4TB 7.2K RPM SAS HDDs. A single 3,072GB pool volume was allocated on each Tier 3 parity group.

Thirty-six 1TB DPVOLs were created, with DPVOL ownership distributed across all MPUs. All tests were conducted with the DPVOLs in a 64GB CLPR to reduce the cache hit rate.

Four Hitachi Compute Blade 2000 servers with CBX55A2 blades were used, each configured with 2 x 3.47 GHz Intel Xeon X5690 processors (6 cores, 12 threads each), 64GB of memory and 2 x Emulex LPe Gb/sec Fibre Channel dual-port host bus adapters (HBAs), for four paths from each blade (a total of 16 paths). The operating system used was RHEL 6.4 (x64). The vdbench benchmark tool (v50404) was used on these clients to drive the tests against raw volumes (no file systems).

Test Methodology

All tests were conducted with the vdbench 5.04 benchmark tool. A skewed workload is needed for a useful HDT test. In simple terms, a skewed workload places heavy I/O on some pages, moderate I/O on other pages, and little or no I/O on the remaining pages. In these tests, workload skew was achieved with the vdbench skew and range parameters. The skew parameter directs vdbench to allocate a specified percentage of total I/O to a particular workload component. The range parameter directs vdbench to read or write a specified logical block address range of one or more storage devices (a short sketch at the end of this section illustrates how these two parameters combine). A parameter file was created on blade1 (acting as the master server) for vdbench to drive the workload on all four CB 2000 blades against the raw DPVOLs.

Unless otherwise noted, the workload consisted of 80% random reads and 20% random writes, with a mixture of block sizes from 4K to 64K (average 49K). In most cases, tests were initiated with Tier 1 empty, allowing performance improvement to be measured as the most active pages were promoted. In some cases, workload skew was adjusted and a follow-up test was initiated with Tier 1 at full capacity. The follow-up tests required Tier 1 pages to be demoted to make room in Tier 1 for newly active pages. Test durations ranged from 2 to 24 hours. In all cases, DPVOLs were prefilled to remove new page allocation as a variable in the tests.
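The following Python sketch is a simplified model, not vdbench itself, of how the skew and range parameters combine to concentrate I/O on a hot band. The workload list is abbreviated, and the percentages shown are taken from the parameter files in Appendix A.

import random

# Simplified model of the skewed workload used in these tests: each entry is
# (low_pct, high_pct, skew_pct), i.e. an LBA range expressed as a percentage
# of the device and the share of total I/O directed at it. The first entry is
# the hot band: 10% of the address space receiving 58% of all I/O.
WORKLOADS = [
    (0, 10, 58),
    (1, 20, 9),
    (2, 30, 2),
    # ... remaining workload definitions omitted; see Appendix A for the full set.
]

def next_io_offset(device_size_bytes):
    """Choose an LBA for the next I/O: pick a workload definition with
    probability proportional to its skew, then a uniform offset inside its range."""
    weights = [skew for _, _, skew in WORKLOADS]
    low, high, _ = random.choices(WORKLOADS, weights=weights, k=1)[0]
    start = int(device_size_bytes * low / 100)
    end = int(device_size_bytes * high / 100)
    return random.randrange(start, end)

With the full set of workload definitions from Appendix A, about 58% of the generated offsets fall within the first 10% of the address space, which is the hot band that active flash is expected to promote.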
Test Results Summary

Test Case One: Skewed 80% Random Read, 20% Random Write Workload With Moving Random Read Hot Band

The first test on VSP G600 used a skewed, 80% read, 20% write, 100% random workload with an average block size of 49K. The I/O rate was controlled by the vdbench performance curve function. A vdbench performance curve begins by running at a maximum (uncontrolled) I/O rate. After a specified interval (for example, 20 minutes), vdbench then runs a specified number of iterations of the test workload at fixed percentages of the observed maximum. For example, if 5,000 IOPS was observed on the first (uncontrolled) iteration, then each subsequent iteration would run at a fixed I/O rate based on a specified percentage of 5,000 IOPS.

The performance curve is useful in HDT testing for several reasons. First, the initial uncontrolled I/O rate establishes how many IOPS the pool can deliver with the current (un-optimized) page distribution. Subsequent (controlled) I/O rates result in lower levels of resource utilization (for example, processor busy), which allows HDT to relocate pages more efficiently. The entire performance curve can be repeated several times, until performance reaches a plateau, indicating that the page distribution is optimal for the workload and configuration.
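A minimal sketch of how the controlled iteration targets are derived from the observed maximum follows. It mirrors the curve=(10-110,10) setting used in the Appendix A parameter files, but it is not the vdbench implementation.

def curve_targets(observed_max_iops, low_pct=10, high_pct=110, step_pct=10):
    """Fixed I/O-rate targets for a vdbench-style performance curve:
    percentages of the observed uncontrolled maximum, e.g. curve=(10-110,10)."""
    return [round(observed_max_iops * pct / 100)
            for pct in range(low_pct, high_pct + 1, step_pct)]

# Example: an uncontrolled run that observed 5,000 IOPS yields controlled
# iterations at 500, 1,000, ..., 5,500 IOPS. Eleven controlled iterations plus
# the uncontrolled run give the 12 intervals in the 20 x 12 x 4 = 960-minute total.
print(curve_targets(5000))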

In this first test case, the performance curve was run four times, with I/O rates in 10% increments between 10% and 110% of the observed maximum. The test interval was 20 minutes; therefore, the total test duration was 20 x 12 x 4 = 960 minutes, or 16 hours. The full 16-hour test was first run on an HDT pool with active flash disabled. Then, Tier 1 was emptied and the HDT monitor data discarded by shrinking the Tier 1 pool volumes out of the pool. Next, new Tier 1 pool volumes were added, and the test was repeated with active flash enabled.

In test case one, the address range of the pages receiving the majority of the random reads was changed every four hours. The random read shift was repeated three times after the initial four-hour test interval, so the total test duration was 16 hours. HDT used a one-hour monitoring cycle for all tests, so changing the workload every four hours made for an interesting test case: it allowed enough time for HDT to react to the workload changes even with active flash disabled. However, HDT was also configured with continuous monitoring, so as the test progressed, HDT without active flash gave relatively more weight to long-term trends and did not react as quickly to workload changes as it did when active flash was enabled.

Table 1 below presents the maximum I/O rate observed on each of the four vdbench performance curve iterations, comparing performance with active flash disabled versus active flash enabled. As expected, active flash improved performance more quickly than HDT alone, despite achieving lower IOPS during the first (baseline) test iteration. On the second maximum I/O rate iteration, HDT with active flash improved IOPS by 55% compared to the first test iteration, while HDT without active flash improved IOPS by 10%. HDT with active flash again outpaced HDT without active flash on the third test iteration. IOPS dropped slightly both with and without active flash during the fourth test iteration, probably due to physical layer issues in the test environment. Also, note that while active flash response time was higher than standalone HDT response time in the first three test iterations, active flash IOPS were also significantly higher in all but the first test iteration. Active flash also improved response time with each test iteration, finally achieving a lower response time than standalone HDT on the last test run. Overall, as expected, HDT with active flash performed better than HDT without active flash in test case one.
Table 1. HDT Performance Improvement With Active Flash Disabled Versus Active Flash Enabled
Hitachi Virtual Storage Platform G600, Hitachi Dynamic Tiering IOPS Improvement, Max I/O Rate, 80% Read / 20% Write, 100% Random, 49K Avg Block Size, Moving Random Read Hot Band, 64GB CLPR

Iteration | Active Flash off IOPS | % Change | Active Flash on IOPS | % Change | Active Flash off RT | Active Flash on RT
1         | 8,944                 | n/a      | 6,739                | n/a      |                     |
2         |                       |          |                      |          |                     |
3         |                       |          |                      |          |                     |
4         |                       |          |                      |          |                     |
Average   | 10,163                | n/a      | 11,849               | n/a      |                     |

Test Case Two: Random, Skewed, 80/20 Read/Write Workload With Random Read Hot Band Moving Across DPVOLs at 10-Minute Intervals

The workload used in the second test case had the following elements in common with test case one:

- 100% random.
- 80% read, 20% write.

- Average block size 49K.
- Skewed so that a small area received 73% of the random reads (58% of total I/O).

The test was initiated with Tier 1 empty: once with active flash enabled, and once with active flash disabled. Test case two differed from the first test in several important respects. First, the area receiving 73% of the random reads (58% of total I/O) was reduced to 10% of the pages on a single DPVOL (about 100GB). Second, this hot band of random reads was moved to a different DPVOL every 10 minutes, with the test repeated 36 times (one iteration for each DPVOL in the pool). Therefore, the planned test duration was 36 x 10 = 360 minutes, or six hours. Finally, rather than controlling the I/O rate with the vdbench performance curve, a fixed I/O rate of 10,000 IOPS was used to control the test.

It was expected that HDT without active flash would not be able to improve performance substantially with this workload, since the address range of the busiest pages changed too frequently. However, with active flash enabled, HDT should be able to improve performance by promoting the suddenly busy pages without waiting for the end of the next monitoring cycle.

Figure 1 shows the average I/O rate achieved at the end of each 10-minute test interval. In 11 of the 13 intervals in which prompt promotion occurred, HDT with active flash delivered from 35% to 125% more IOPS than HDT alone. In the other two intervals in which prompt promotion was observed, HDT with active flash delivered about the same IOPS as HDT without active flash. In those two intervals, active flash started from a relatively low level of baseline performance and then improved IOPS via prompt promotion until, by the end of the 10-minute interval, IOPS were about the same as standalone HDT.

During test iteration 16 with active flash enabled, vdbench aborted the test due to an I/O error. Since there were no anomalies in the results from the first 15 iterations with active flash enabled, it was decided to move on to new test cases rather than repeating test case two.

Figure 1. VSP G600 HDT With Active Flash Performance Improvement in Test Case Two

Test case two confirmed that HDT with active flash outperforms HDT alone when the location of the busiest pages changes frequently. In 11 of the 13 intervals during which prompt promotion occurred, HDT with active flash delivered from 35% to 125% more IOPS than HDT alone delivered with the same workload (see the green columns in Figure 1).

Another goal of test case two was to measure the additional processor cycles consumed by active flash. As expected, processor busy rates were higher with active flash enabled than with active flash disabled, and the differences were quite significant. For the intervals sampled, the 16 cores ranged from 15 to 31 points busier with active flash enabled than with active flash disabled. For example, during one of the busiest intervals, the 16 cores averaged 22.4% busy with active flash off, versus 45.6% busy with active flash on. While the increase in processor busy rates was higher than expected, it was reasonable in view of the higher IOPS delivered by VSP G600 with active flash enabled. It appears that the additional processor cycles consumed by active flash were accomplishing useful work.

Test Case Three: Random Read Hot Band Moving Across DPVOLs at 10-Minute Intervals With Tier 1 Full

For the third VSP G600 active flash test, the skewed, 100% random, 80% read, 20% write workload from test case one was run until Tier 1 was full except for the 2% relocation buffer. The initial workload was skewed so that pages in the lowest 10% of each DPVOL's logical block address range received 73% of the random reads. The pages mapped to this 10% hot band were the most likely to be promoted to Tier 1.

Next, after Tier 1 was full, the random read hot band was run across each DPVOL individually at 10-minute intervals, while the remainder of the workload was left unchanged. Moreover, the address range of the random read hot band was shifted to an area that had not previously received much I/O, which guaranteed that the suddenly active pages would reside on Tier 2 or Tier 3. And because Tier 1 was full at the beginning of the test, high-priority demotion would be required to make space available in Tier 1 for suddenly active pages. The HDT monitoring cycle was set to one hour, and since the hot spot shifted every 10 minutes, the workload changes were too quick to allow HDT without active flash to improve performance. But if active flash worked as designed, performance of a rapidly moving hot spot could be improved via prompt promotion and high-priority demotion.

Figure 2 shows the performance improvement observed as the random read hot band moved across the DPVOLs at 10-minute intervals. A fixed I/O rate of 10,000 IOPS was used to control the test. The blue column shows average IOPS at the beginning of each 10-minute interval, and the red column shows average IOPS at the end of each 10-minute interval. In each case, final IOPS were higher than initial IOPS, sometimes by more than 100%. (Also, note that the cache hit rate was kept low by running all tests in a 64GB CLPR.)

Figure 2. VSP G600 Running HDT With Active Flash Performance Improvement in Test Case Three

Figure 3 shows planned versus real-time relocation during test case three. Note that prompt promotion (blue area) and high-priority demotion (brown area) moved thousands of pages in response to the rapidly changing workload. Test case three confirmed that active flash can improve the performance of a workload in which the address range of the busiest pages changes quickly, by using prompt promotion to move the suddenly busy pages to Tier 1. Test case three also confirmed that even when Tier 1 is full, active flash makes space available for suddenly busy pages via high-priority demotion.

Figure 3. HDT With Active Flash Relocation Between Tier 1 and Tier 2 in Test Case Three

Test Case Four: Random Write Hot Band Moving Across DPVOLs at 10-Minute Intervals With Tier 1 Full

The workload for test case four was similar to test case three, but the moving hot band comprising 58% of total I/O was changed from 100% random read to 100% random write. As a result, the overall read/write mix changed to 22% random read, 20% random write with a static address range, and 58% random write with a moving address range. A fixed I/O rate of 10,000 IOPS was used to control the test. The test was run without reinitializing the pool after test case three, so Tier 1 was full at the beginning of the test. In addition, the address range of the pages in the moving random write hot band was changed to an area that had previously received very little I/O, so most of the suddenly hot pages would reside on Tier 3. As in test case three, the hot band was focused on each individual DPVOL in the pool at 10-minute intervals. And, since Tier 1 was full at the beginning of test case four, both prompt promotion and high-priority demotion would be needed to improve the performance of the rapidly changing workload.

Figure 4 shows the IOPS achieved during a typical 10-minute interval in test case four. As shown in Figure 4, a test interval typically began with relatively high IOPS, but the I/O rate quickly declined as cache write pending filled to 70% on the MPU that owned the DPVOL receiving the heavy random write workload. Additional spikes in the I/O rate occurred when write pending emptied enough to complete more writes at cache speed. Most intervals appeared to show a small improvement in the average I/O rate, but the average improvement (if any) was much smaller than what occurred in test case three (moving random read hot band).

It was expected that HDT with active flash would have difficulty improving the performance of a random write workload in which most of the writes were directed to Tier 3. Performance monitor confirmed that the Tier 3 parity groups were 100% busy throughout test case four. All tiers were configured in RAID-6, and RAID-6 devices require six back-end operations for each host random write (a back-of-the-envelope comparison follows the conclusions below). However, since HDT does not count parity operations, a random write workload can easily cause tier overload without allowing many pages to reach the prompt promotion threshold.

Figure 5 compares Tier 3 to Tier 1 relocation rates in test case three (moving random read hot band) versus test case four (moving random write hot band). The random read hot band (red and blue areas) resulted in at least three times more promotion from Tier 3 to Tier 1 than the random write hot band (brown and gray areas), even though Tier 3 was more heavily loaded by the random writes in test case four.

We can draw several conclusions about HDT with active flash performance from test case four:

- With or without active flash, HDT won't promote pages from lower tiers simply because the lower tier is overloaded. A lower tier can become overloaded yet still have few or no pages that meet the criteria for promotion.
- Overloading a lower tier with random writes is likely to lead to a persistent overload condition, since HDT doesn't count parity operations. This is particularly true when the lower tier is configured in RAID-6.
- Persistent tier overload indicates that some kind of reconfiguration is needed. Possibilities include tier expansion, a change in RAID type (for example, RAID-1+0 for random writes), implementation of a tiering policy, and so on.
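The back-of-the-envelope comparison referenced above is sketched below in Python. The write penalties are the commonly cited small-block figures for each RAID type, and the example I/O rate is the 5,800 IOPS hot band from this test case (58% of 10,000 IOPS); the sketch is illustrative, not an HDT sizing tool.

# Back-end operations generated per host random write, using the commonly
# cited small-block write penalties. Actual array behavior also depends on
# caching and stripe-width effects.
WRITE_PENALTY = {
    "RAID-6": 6,    # read old data, read both old parities, write data, write both parities
    "RAID-1+0": 2,  # write the data to both mirror copies
}

def backend_write_ops(host_write_iops, raid_type):
    """Back-end operations per second generated by a host random-write load."""
    return host_write_iops * WRITE_PENALTY[raid_type]

# Example: the 5,800 IOPS random-write hot band generates roughly 34,800
# back-end operations per second on RAID-6, versus about 11,600 on RAID-1+0,
# which is why a RAID-6 lower tier saturates so easily under random writes
# while HDT, which counts only host I/O, sees no pages worth promoting.
print(backend_write_ops(5800, "RAID-6"), backend_write_ops(5800, "RAID-1+0"))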

Figure 4. HDT With Active Flash Performance in Test Case Four During a Typical 10-Minute Test Iteration

Figure 5. HDT With Active Flash Tier 3 to Tier 1 Promotion Comparison

Test Case Five: Skewed, 80% Random Read, 20% Random Write Workload, Followed by Reads From Tier 2 or Tier 3

The workload used in the fifth test case had the following elements in common with several other test cases:

- 100% random.
- 80% read, 20% write.
- Average block size 49K.
- Skewed so that 10% of the pages received 73% of the random reads (58% of total I/O).

Unlike previous tests, which used a variable I/O rate controlled by the vdbench performance curve, the above workload was run with a fixed I/O rate of 10,000 IOPS for three hours. Tier 1 was empty when the test started. The second, two-hour test phase placed a skewed, 100% random read workload on eight DPVOLs (two from each MPU), using an address range that had not previously received much I/O. Therefore, most of the reads in the second test phase came from Tier 3 or Tier 2. Phase two used a maximum (uncontrolled) I/O rate to make the test more challenging.

Figure 6 shows the IOPS attained during the first, three-hour test phase. As shown in Figure 6, gradual, uneven improvement in IOPS began about halfway through the test. Progress was slower than in previous tests that used a variable I/O rate, for two primary reasons. First, even though the workload consisted of only 20% random writes, the constant I/O rate of 10,000 IOPS pushed cache write pending (CWP) to a level that inhibited promotion. The only DPVOLs that had a significant number of pages promoted in phase one were those owned by MPU-21, where CWP remained relatively low (see Figure 7 and Figure 10).

In addition, Tier 3 was overloaded by the random write workload, which was a drag on performance throughout phase one of test case five. PG busy on the four Tier 3 PGs remained at 100% for the entirety of the three-hour initial test phase.

Figure 8 displays the IOPS attained during the second test phase, in which a skewed, 100% random read workload was placed on eight selected DPVOLs, using an address range that resided mostly on the lower tiers. Note that phase two IOPS began to improve almost immediately, but spiked dramatically at about 90 minutes into the two-hour test. The big improvement at this point coincided with a large increase in planned (not real-time) relocation from Tier 2 to Tier 1 (see the red band in Figure 9). Relatively few pages reached the prompt promotion threshold in phase two, but many pages (especially on Tier 2) qualified for normal promotion, and therefore a large IOPS increase occurred in the last 30 minutes of the test. Phase two saw more normal promotion than prompt promotion in part because there was only one sudden change in the address range of the busy pages, which occurred at the beginning of the second phase. For the remaining two hours of the test, the workload skew remained unchanged, which allowed time for HDT to adjust to the new I/O pattern and promote the appropriate pages.

We can make the following observations about test case five:

- Despite a challenging workload, HDT did improve performance in test case five. However, progress was slower than in some previous test cases due to high write pending in phase one and heavy load on Tier 3 (particularly in phase one, with declining but still high load on Tier 3 in phase two).
- With or without active flash, HDT won't promote pages from lower tiers when cache write pending reaches 70%. High cache write pending inhibited promotion in the first phase of test case five.
- As noted previously, HDT with active flash has limited ability to relieve a lower-tier bottleneck caused by random writes, particularly if the lower tier is configured in RAID-6.
- Prompt promotion and normal HDT promotion worked well together in test case five, particularly in phase two. Early in phase two, many Tier 3 pages qualified for prompt promotion. Later, even more Tier 2 and Tier 3 pages that didn't meet the prompt promotion criteria were promoted by normal HDT relocation. The normal (planned) promotion helped to produce a big performance gain toward the end of phase two.
- During phase one of test case five, it was noted that prompt promotion didn't begin until the completion of the current HDT monitoring cycle at the top of the hour. With active flash enabled, prompt promotion of qualifying pages should have started immediately, without waiting for the end of the monitoring cycle. It was thought that the delay in prompt promotion might have been related to a microcode bug that is triggered when heavy I/O starts with Tier 1 empty. Several attempts were made to reproduce the problem for engineering, but active flash always worked as expected during the problem reproduction attempts.

Figure 6. IOPS Achieved in Phase One of Test Case Five

Figure 7. Cache Write Pending in Test Case Five Phase One

Figure 8. IOPS Achieved in Test Case Five Phase Two

Figure 9. Normal HDT (Planned) and Active Flash (Real-Time) Relocation in Test Case Five

Figure 10. Megabytes in Tier 1 by DPVOL in Test Case Five

Rules of Thumb for Best Performance

Active flash is designed to get the most out of flash media by quickly promoting the busiest pages to the top tier. Another requirement for realizing the performance potential of fast drives is concurrent I/O. To facilitate concurrent I/O in HDT pools, create eight or more DPVOLs per FMD or SSD in the pool, and distribute DPVOL ownership across multiple virtual storage directors (VSDs). The size, number and ownership of pool volumes are usually insignificant for performance; for pool volumes, just create the smallest possible number of equally sized LDEVs in each parity group. A parity group should be dedicated to only one pool. Do not share parity groups between pools.

Conclusions

There are a few noteworthy observations regarding this testing:

- Active flash worked as expected in these tests. Workloads with frequent or abrupt changes in the address range of frequently accessed pages performed better with active flash enabled. Prompt promotion and high-priority demotion were both confirmed in the HDT relocation log when active flash was enabled. It was also observed that active flash began to work immediately in response to workload changes, without waiting for the end of the current HDT monitoring cycle.
- Active flash is designed to promote suddenly busy pages promptly and, when necessary, to demote inactive pages quickly. Therefore, the total number of pages relocated may increase substantially when active flash is enabled. Increased relocation may become counterproductive if pages with a long-term history of frequent access are prematurely demoted. It's possible that workloads with repeating patterns and/or gradual shifts in the I/O frequency distribution will get better long-term results with active flash disabled.
- On VSP Gx00 systems, in a test designed to drive high relocation rates, the additional CPU cycles consumed by active flash were substantial. Processor busy rates across 16 cores were up to 31 points higher with active flash enabled than with active flash disabled. On the other hand, average IOPS on the same test were up to 120% higher with active flash enabled than with HDT alone, so the extra processor cycles consumed by active flash were used productively. Monitor VSP Gx00 processor busy rates to be sure that the recommended thresholds (40-45% busy) are not exceeded too frequently.
- As observed in test case five, both prompt promotion and normal HDT promotion work more slowly, or not at all, when cache write pending levels reach 70%.
- Active flash has not significantly changed HDT's ability to relieve a bottleneck on a lower tier, for the following reasons:
  - With or without active flash, HDT won't promote pages from lower tiers simply because the lower tier is overloaded. A lower tier can become overloaded yet still have few or no pages that meet the criteria for promotion.
  - Overloading a lower tier with random writes is likely to lead to a persistent overload condition, since HDT doesn't count parity operations. This is particularly true when the lower tier is configured in RAID-6.
  - Persistent tier overload indicates that some kind of reconfiguration is needed. Possibilities include tier expansion, a change in RAID type (for example, RAID-1+0 for random writes), implementation of a tiering policy, and so on.

Appendix A: Vdbench Parameter Files

Test Case One: Skewed 80% Random Read, 20% Random Write Workload With Moving Random Read Hot Band

hd=default,vdbench=/scripts/charlie/vdbench504,user=root,shell=ssh
hd=cb30,system=
hd=cb31,system=
hd=cb32,system=
hd=cb33,system=
sd=raw1,host=cb30,lun=/dev/sdd,openflags=o_direct
sd=raw2,host=cb30,lun=/dev/sdg,openflags=o_direct
sd=raw36,host=cb33,lun=/dev/sdi,openflags=o_direct
wd=wd1,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(00,10),skew=58
wd=wd2,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(01,20),skew=9
wd=wd3,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(02,30),skew=2
wd=wd4,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(03,40),skew=2
wd=wd5,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(04,50),skew=2
wd=wd6,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(05,60),skew=2
wd=wd7,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(06,70),skew=2
wd=wd8,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(07,80),skew=1
wd=wd9,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(08,90),skew=1
wd=wd10,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(09,98),skew=1
wd=wd11,sd=raw*,xfersize=(4k,12,8k,17,16k,21,32k,30,64k,20),rdpct=0,seekpct=100,range=(0,3),skew=9
wd=wd12,sd=raw*,xfersize=(4k,12,8k,17,16k,21,32k,30,64k,20),rdpct=0,seekpct=100,range=(25,28),skew=6
wd=wd13,sd=raw*,xfersize=(4k,12,8k,17,16k,21,32k,30,64k,20),rdpct=0,seekpct=100,range=(75,78),skew=3
wd=wd14,sd=raw*,xfersize=(4k,12,8k,17,16k,21,32k,30,64k,20),rdpct=0,seekpct=100,range=(97,100),skew=2
wd=wd15,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(11,20),skew=58
wd=wd16,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(21,30),skew=58
wd=wd17,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(31,40),skew=58
wd=wd18,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(41,50),skew=58
wd=wd19,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
rd=run1,wd=(wd1-wd14),iorate=curve,curve=(10-110,10),elapsed=1200,interval=5
rd=run2,wd=(wd2-wd15),iorate=curve,curve=(10-110,10),elapsed=1200,interval=5
rd=run3,wd=(wd2-wd14,wd16),iorate=curve,curve=(10-110,10),elapsed=1200,interval=5
rd=run4,wd=(wd2-wd14,wd17),iorate=curve,curve=(10-110,10),elapsed=1200,interval=5
rd=run5,wd=(wd2-wd14,wd18),iorate=curve,curve=(10-110,10),elapsed=1200,interval=5
rd=run6,wd=(wd2-wd14,wd19),iorate=curve,curve=(10-110,10),elapsed=1200,interval=5

Test Case Two: Random, Skewed, 80/20 Read/Write Workload With Random Read Hot Band Moving Across DPVOLs at 10-Minute Intervals

hd=default,vdbench=/scripts/charlie/vdbench504,user=root,shell=ssh
hd=cb30,system=
hd=cb31,system=
hd=cb32,system=
hd=cb33,system=
sd=raw1,host=cb30,lun=/dev/sdd,openflags=o_direct
sd=raw2,host=cb30,lun=/dev/sdg,openflags=o_direct
sd=raw36,host=cb33,lun=/dev/sdi,openflags=o_direct
wd=wd1,sd=raw36,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(00,10),skew=58
wd=wd2,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(01,20),skew=9
wd=wd3,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(02,30),skew=2
wd=wd4,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(03,40),skew=2
wd=wd5,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(04,50),skew=2
wd=wd6,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(05,60),skew=2
wd=wd7,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(06,70),skew=2
wd=wd8,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(07,80),skew=1
wd=wd9,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(08,90),skew=1
wd=wd10,sd=raw*,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(09,98),skew=1
wd=wd11,sd=raw*,xfersize=(4k,12,8k,17,16k,21,32k,30,64k,20),rdpct=0,seekpct=100,range=(0,3),skew=9
wd=wd12,sd=raw*,xfersize=(4k,12,8k,17,16k,21,32k,30,64k,20),rdpct=0,seekpct=100,range=(25,28),skew=6
wd=wd13,sd=raw*,xfersize=(4k,12,8k,17,16k,21,32k,30,64k,20),rdpct=0,seekpct=100,range=(75,78),skew=3
wd=wd14,sd=raw*,xfersize=(4k,12,8k,17,16k,21,32k,30,64k,20),rdpct=0,seekpct=100,range=(97,100),skew=2
wd=wd15,sd=raw35,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(11,20),skew=58
wd=wd16,sd=raw34,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(21,30),skew=58
wd=wd17,sd=raw33,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(31,40),skew=58
wd=wd18,sd=raw32,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(41,50),skew=58
wd=wd19,sd=raw31,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd20,sd=raw30,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd21,sd=raw29,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd22,sd=raw28,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd23,sd=raw27,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd24,sd=raw26,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd25,sd=raw25,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd26,sd=raw24,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd27,sd=raw23,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd28,sd=raw22,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd29,sd=raw21,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd30,sd=raw20,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd31,sd=raw19,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd32,sd=raw18,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd33,sd=raw17,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd34,sd=raw16,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd35,sd=raw15,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd36,sd=raw14,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd37,sd=raw13,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd38,sd=raw12,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd39,sd=raw11,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd40,sd=raw10,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd41,sd=raw9,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd42,sd=raw8,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd43,sd=raw7,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd44,sd=raw6,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd45,sd=raw5,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd46,sd=raw4,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd47,sd=raw3,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd48,sd=raw2,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
wd=wd49,sd=raw1,xfersize=(4k,1,8k,1,16k,8,32k,15,64k,75),rdpct=100,seekpct=100,range=(51,60),skew=58
rd=run1,wd=(wd1-wd14),iorate=10000,elapsed=600,interval=5
rd=run2,wd=(wd2-wd15),iorate=10000,elapsed=600,interval=5
rd=run3,wd=(wd2-wd14,wd16),iorate=10000,elapsed=600,interval=5
rd=run4,wd=(wd2-wd14,wd17),iorate=10000,elapsed=600,interval=5
rd=run5,wd=(wd2-wd14,wd18),iorate=10000,elapsed=600,interval=5
rd=run6,wd=(wd2-wd14,wd19),iorate=10000,elapsed=600,interval=5
rd=run7,wd=(wd2-wd14,wd20),iorate=10000,elapsed=600,interval=5
rd=run8,wd=(wd2-wd14,wd21),iorate=10000,elapsed=600,interval=5
rd=run9,wd=(wd2-wd14,wd22),iorate=10000,elapsed=600,interval=5

rd=run10,wd=(wd2-wd14,wd23),iorate=10000,elapsed=600,interval=5
rd=run11,wd=(wd2-wd14,wd24),iorate=10000,elapsed=600,interval=5
rd=run12,wd=(wd2-wd14,wd25),iorate=10000,elapsed=600,interval=5
rd=run13,wd=(wd2-wd14,wd26),iorate=10000,elapsed=600,interval=5
rd=run14,wd=(wd2-wd14,wd27),iorate=10000,elapsed=600,interval=5
rd=run15,wd=(wd2-wd14,wd28),iorate=10000,elapsed=600,interval=5
rd=run16,wd=(wd2-wd14,wd29),iorate=10000,elapsed=600,interval=5
rd=run17,wd=(wd2-wd14,wd30),iorate=10000,elapsed=600,interval=5
rd=run18,wd=(wd2-wd14,wd31),iorate=10000,elapsed=600,interval=5
rd=run19,wd=(wd2-wd14,wd32),iorate=10000,elapsed=600,interval=5
rd=run20,wd=(wd2-wd14,wd33),iorate=10000,elapsed=600,interval=5
rd=run21,wd=(wd2-wd14,wd34),iorate=10000,elapsed=600,interval=5
rd=run22,wd=(wd2-wd14,wd35),iorate=10000,elapsed=600,interval=5
rd=run23,wd=(wd2-wd14,wd36),iorate=10000,elapsed=600,interval=5
rd=run24,wd=(wd2-wd14,wd37),iorate=10000,elapsed=600,interval=5
rd=run25,wd=(wd2-wd14,wd38),iorate=10000,elapsed=600,interval=5
rd=run26,wd=(wd2-wd14,wd39),iorate=10000,elapsed=600,interval=5
rd=run27,wd=(wd2-wd14,wd40),iorate=10000,elapsed=600,interval=5
rd=run28,wd=(wd2-wd14,wd41),iorate=10000,elapsed=600,interval=5
rd=run29,wd=(wd2-wd14,wd42),iorate=10000,elapsed=600,interval=5
rd=run30,wd=(wd2-wd14,wd43),iorate=10000,elapsed=600,interval=5
rd=run31,wd=(wd2-wd14,wd44),iorate=10000,elapsed=600,interval=5
rd=run32,wd=(wd2-wd14,wd45),iorate=10000,elapsed=600,interval=5
rd=run33,wd=(wd2-wd14,wd46),iorate=10000,elapsed=600,interval=5
rd=run34,wd=(wd2-wd14,wd47),iorate=10000,elapsed=600,interval=5
rd=run35,wd=(wd2-wd14,wd48),iorate=10000,elapsed=600,interval=5
rd=run36,wd=(wd2-wd14,wd49),iorate=10000,elapsed=600,interval=5


Performance of relational database management Building a 3-D DRAM Architecture for Optimum Cost/Performance By Gene Bowles and Duke Lambert As systems increase in performance and power, magnetic disk storage speeds have lagged behind. But using solidstate

More information

DS8880 High Performance Flash Enclosure Gen2

DS8880 High Performance Flash Enclosure Gen2 Front cover DS8880 High Performance Flash Enclosure Gen2 Michael Stenson Redpaper DS8880 High Performance Flash Enclosure Gen2 The DS8880 High Performance Flash Enclosure (HPFE) Gen2 is a 2U Redundant

More information

EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE

EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE White Paper EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE EMC XtremSF, EMC XtremCache, EMC Symmetrix VMAX and Symmetrix VMAX 10K, XtremSF and XtremCache dramatically improve Oracle performance Symmetrix

More information

VMware vsan Ready Nodes

VMware vsan Ready Nodes VMware vsan Ready Nodes Product Brief - August 31, 2017 1 of 7 VMware vsan Ready Nodes Hyperconverged Infrastructure Appliance August 2017 Making the best decisions for Information Management 2017 Evaluator

More information

Best Practices for Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell EqualLogic Storage Arrays

Best Practices for Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell EqualLogic Storage Arrays Dell EqualLogic Best Practices Series Best Practices for Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell EqualLogic Storage Arrays A Dell Technical Whitepaper Jerry Daugherty Storage Infrastructure

More information

IBM System Storage DS5020 Express

IBM System Storage DS5020 Express IBM DS5020 Express Manage growth, complexity, and risk with scalable, high-performance storage Highlights Mixed host interfaces support (FC/iSCSI) enables SAN tiering Balanced performance well-suited for

More information

VERITAS Storage Foundation 4.0 for Oracle

VERITAS Storage Foundation 4.0 for Oracle J U N E 2 0 0 4 VERITAS Storage Foundation 4.0 for Oracle Performance Brief OLTP Solaris Oracle 9iR2 VERITAS Storage Foundation for Oracle Abstract This document details the high performance characteristics

More information

Accelerating Microsoft SQL Server 2016 Performance With Dell EMC PowerEdge R740

Accelerating Microsoft SQL Server 2016 Performance With Dell EMC PowerEdge R740 Accelerating Microsoft SQL Server 2016 Performance With Dell EMC PowerEdge R740 A performance study of 14 th generation Dell EMC PowerEdge servers for Microsoft SQL Server Dell EMC Engineering September

More information

Identifying Performance Bottlenecks with Real- World Applications and Flash-Based Storage

Identifying Performance Bottlenecks with Real- World Applications and Flash-Based Storage Identifying Performance Bottlenecks with Real- World Applications and Flash-Based Storage TechTarget Dennis Martin 1 Agenda About Demartek Enterprise Data Center Environments Storage Performance Metrics

More information

Technical Note P/N REV A01 March 29, 2007

Technical Note P/N REV A01 March 29, 2007 EMC Symmetrix DMX-3 Best Practices Technical Note P/N 300-004-800 REV A01 March 29, 2007 This technical note contains information on these topics: Executive summary... 2 Introduction... 2 Tiered storage...

More information

Virtualizing SQL Server 2008 Using EMC VNX Series and VMware vsphere 4.1. Reference Architecture

Virtualizing SQL Server 2008 Using EMC VNX Series and VMware vsphere 4.1. Reference Architecture Virtualizing SQL Server 2008 Using EMC VNX Series and VMware vsphere 4.1 Copyright 2011, 2012 EMC Corporation. All rights reserved. Published March, 2012 EMC believes the information in this publication

More information

EMC Tiered Storage for Microsoft SQL Server 2008 Enabled by EMC Symmetrix VMAX with FAST

EMC Tiered Storage for Microsoft SQL Server 2008 Enabled by EMC Symmetrix VMAX with FAST EMC Tiered Storage for Microsoft SQL Server 2008 Enabled by EMC Symmetrix VMAX with FAST A Detailed Review Abstract This white paper examines the configuration details, efficiency, and increased performance

More information

HITACHI DYNAMIC TIERING WEBTECH SERIES

HITACHI DYNAMIC TIERING WEBTECH SERIES HITACHI DYNAMIC TIERING WEBTECH SERIES SESSION 2 OF 3 STEVE BURR, SOLUTION ARCHITECT, GSS SERVICES ENGINEERING JOHN HARKER, SENIOR PRODUCT MARKETING MANAGER JULY 13, 20 AND 27, 2011 WEBTECH EDUCATIONAL

More information

DATA PROTECTION IN A ROBO ENVIRONMENT

DATA PROTECTION IN A ROBO ENVIRONMENT Reference Architecture DATA PROTECTION IN A ROBO ENVIRONMENT EMC VNX Series EMC VNXe Series EMC Solutions Group April 2012 Copyright 2012 EMC Corporation. All Rights Reserved. EMC believes the information

More information

Vendor: Hitachi. Exam Code: HH Exam Name: Hitachi Data Systems Storage Fondations. Version: Demo

Vendor: Hitachi. Exam Code: HH Exam Name: Hitachi Data Systems Storage Fondations. Version: Demo Vendor: Hitachi Exam Code: HH0-130 Exam Name: Hitachi Data Systems Storage Fondations Version: Demo QUESTION: 1 A drive within a HUS system reaches its read error threshold. What will happen to the data

More information

Automated Storage Tiering on Infortrend s ESVA Storage Systems

Automated Storage Tiering on Infortrend s ESVA Storage Systems Automated Storage Tiering on Infortrend s ESVA Storage Systems White paper Abstract This white paper introduces automated storage tiering on Infortrend s ESVA storage arrays. Storage tiering can generate

More information

HP SAS benchmark performance tests

HP SAS benchmark performance tests HP SAS benchmark performance tests technology brief Abstract... 2 Introduction... 2 Test hardware... 2 HP ProLiant DL585 server... 2 HP ProLiant DL380 G4 and G4 SAS servers... 3 HP Smart Array P600 SAS

More information

HP EVA P6000 Storage performance

HP EVA P6000 Storage performance Technical white paper HP EVA P6000 Storage performance Table of contents Introduction 2 Sizing up performance numbers 2 End-to-end performance numbers 3 Cache performance numbers 4 Performance summary

More information

SNIA VDBENCH Rules of Thumb

SNIA VDBENCH Rules of Thumb SNIA VDBENCH Rules of Thumb Steven A. Johnson SNIA Emerald TM Training SNIA Emerald Power Efficiency Measurement Specification, for use in EPA ENERGY STAR July 14-17, 2014 SNIA Emerald TM Training ~ July

More information

TPC-E testing of Microsoft SQL Server 2016 on Dell EMC PowerEdge R830 Server and Dell EMC SC9000 Storage

TPC-E testing of Microsoft SQL Server 2016 on Dell EMC PowerEdge R830 Server and Dell EMC SC9000 Storage TPC-E testing of Microsoft SQL Server 2016 on Dell EMC PowerEdge R830 Server and Dell EMC SC9000 Storage Performance Study of Microsoft SQL Server 2016 Dell Engineering February 2017 Table of contents

More information

Virtualization of the MS Exchange Server Environment

Virtualization of the MS Exchange Server Environment MS Exchange Server Acceleration Maximizing Users in a Virtualized Environment with Flash-Powered Consolidation Allon Cohen, PhD OCZ Technology Group Introduction Microsoft (MS) Exchange Server is one of

More information

DS8880 High-Performance Flash Enclosure Gen2

DS8880 High-Performance Flash Enclosure Gen2 DS8880 High-Performance Flash Enclosure Gen2 Bert Dufrasne Kerstin Blum Jeff Cook Peter Kimmel Product Guide DS8880 High-Performance Flash Enclosure Gen2 This IBM Redpaper publication describes the High-Performance

More information

Dell PowerEdge R720xd with PERC H710P: A Balanced Configuration for Microsoft Exchange 2010 Solutions

Dell PowerEdge R720xd with PERC H710P: A Balanced Configuration for Microsoft Exchange 2010 Solutions Dell PowerEdge R720xd with PERC H710P: A Balanced Configuration for Microsoft Exchange 2010 Solutions A comparative analysis with PowerEdge R510 and PERC H700 Global Solutions Engineering Dell Product

More information

Dionseq Uatummy Odolorem Vel

Dionseq Uatummy Odolorem Vel W H I T E P A P E R Aciduisismodo Tiered Storage Design Dolore Eolore Guide Dionseq Uatummy Odolorem Vel Best Practices for Cost-effective Designs By John Harker September 2010 Hitachi Data Systems 2 Table

More information

A Performance Characterization of Microsoft SQL Server 2005 Virtual Machines on Dell PowerEdge Servers Running VMware ESX Server 3.

A Performance Characterization of Microsoft SQL Server 2005 Virtual Machines on Dell PowerEdge Servers Running VMware ESX Server 3. A Performance Characterization of Microsoft SQL Server 2005 Virtual Machines on Dell PowerEdge Servers Running VMware ESX Server 3.5 Todd Muirhead Dell Enterprise Technology Center www.delltechcenter.com

More information

Maintaining End-to-End Service Levels for VMware Virtual Machines Using VMware DRS and EMC Navisphere QoS

Maintaining End-to-End Service Levels for VMware Virtual Machines Using VMware DRS and EMC Navisphere QoS Maintaining End-to-End Service Levels for VMware Virtual Machines Using VMware DRS and EMC Navisphere QoS Applied Technology Abstract This white paper describes tests in which Navisphere QoS Manager and

More information

Planning for Easy Tier with IBM System Storage Storwize V7000 and SAN Volume Controller

Planning for Easy Tier with IBM System Storage Storwize V7000 and SAN Volume Controller Planning for Easy Tier with IBM System Storage Storwize V7000 and SAN Volume Controller May 2013 Nick Clayton Carlos Fuente Document WP102295 Systems and Technology Group 2013, International Business Machines

More information

Upgrade to Microsoft SQL Server 2016 with Dell EMC Infrastructure

Upgrade to Microsoft SQL Server 2016 with Dell EMC Infrastructure Upgrade to Microsoft SQL Server 2016 with Dell EMC Infrastructure Generational Comparison Study of Microsoft SQL Server Dell Engineering February 2017 Revisions Date Description February 2017 Version 1.0

More information

All-Flash Storage Solution for SAP HANA:

All-Flash Storage Solution for SAP HANA: All-Flash Storage Solution for SAP HANA: Storage Considerations using SanDisk Solid State Devices WHITE PAPER Western Digital Technologies, Inc. 951 SanDisk Drive, Milpitas, CA 95035 www.sandisk.com Table

More information

Deploy a High-Performance Database Solution: Cisco UCS B420 M4 Blade Server with Fusion iomemory PX600 Using Oracle Database 12c

Deploy a High-Performance Database Solution: Cisco UCS B420 M4 Blade Server with Fusion iomemory PX600 Using Oracle Database 12c White Paper Deploy a High-Performance Database Solution: Cisco UCS B420 M4 Blade Server with Fusion iomemory PX600 Using Oracle Database 12c What You Will Learn This document demonstrates the benefits

More information

Storage Tiering for the Mainframe

Storage Tiering for the Mainframe Hitachi Dynamic Tiering Storage Tiering for the Mainframe William Smith Hitachi Data Systems February 5, 2013 Session Number 12680 AGENDA Virtual Storage Platform Architecture Designed for Tiering Hitachi

More information

HP SmartCache technology

HP SmartCache technology Technical white paper HP SmartCache technology Table of contents Abstract... 2 Introduction... 2 Comparing storage technology performance... 3 What about hybrid drives?... 3 Why caching?... 4 How does

More information

vsan 6.6 Performance Improvements First Published On: Last Updated On:

vsan 6.6 Performance Improvements First Published On: Last Updated On: vsan 6.6 Performance Improvements First Published On: 07-24-2017 Last Updated On: 07-28-2017 1 Table of Contents 1. Overview 1.1.Executive Summary 1.2.Introduction 2. vsan Testing Configuration and Conditions

More information

LEVERAGING EMC FAST CACHE WITH SYBASE OLTP APPLICATIONS

LEVERAGING EMC FAST CACHE WITH SYBASE OLTP APPLICATIONS White Paper LEVERAGING EMC FAST CACHE WITH SYBASE OLTP APPLICATIONS Abstract This white paper introduces EMC s latest innovative technology, FAST Cache, and emphasizes how users can leverage it with Sybase

More information

Hitachi Data Systems. Hitachi Virtual Storage Platform Gx00 with NAS Modules NAS Platform (System LU) Migration Guide MK-92HNAS078-00

Hitachi Data Systems. Hitachi Virtual Storage Platform Gx00 with NAS Modules NAS Platform (System LU) Migration Guide MK-92HNAS078-00 Hitachi Data Systems Hitachi Virtual Storage Platform Gx00 with NAS Modules NAS Platform (System LU) Migration Guide MK-92HNAS078-00 2011-2016 Hitachi, Ltd. All rights reserved. No part of this publication

More information

IBM System Storage DS8870 Release R7.3 Performance Update

IBM System Storage DS8870 Release R7.3 Performance Update IBM System Storage DS8870 Release R7.3 Performance Update Enterprise Storage Performance Yan Xu Agenda Summary of DS8870 Hardware Changes I/O Performance of High Performance Flash Enclosure (HPFE) Easy

More information

Dell EMC SCv3020 7,000 Mailbox Exchange 2016 Resiliency Storage Solution using 7.2K drives

Dell EMC SCv3020 7,000 Mailbox Exchange 2016 Resiliency Storage Solution using 7.2K drives Dell EMC SCv3020 7,000 Mailbox Exchange 2016 Resiliency Storage Solution using 7.2K drives Microsoft ESRP 4.0 Abstract This document describes the Dell EMC SCv3020 storage solution for Microsoft Exchange

More information

Maintaining End-to-End Service Levels for VMware Virtual Machines Using VMware DRS and EMC Navisphere QoS

Maintaining End-to-End Service Levels for VMware Virtual Machines Using VMware DRS and EMC Navisphere QoS Maintaining End-to-End Service Levels for VMware Virtual Machines Using VMware DRS and EMC Applied Technology Abstract This white paper describes tests in which Navisphere QoS Manager and VMware s Distributed

More information

Best Practices for SSD Performance Measurement

Best Practices for SSD Performance Measurement Best Practices for SSD Performance Measurement Overview Fast Facts - SSDs require unique performance measurement techniques - SSD performance can change as the drive is written - Accurate, consistent and

More information

Fusion iomemory PCIe Solutions from SanDisk and Sqrll make Accumulo Hypersonic

Fusion iomemory PCIe Solutions from SanDisk and Sqrll make Accumulo Hypersonic WHITE PAPER Fusion iomemory PCIe Solutions from SanDisk and Sqrll make Accumulo Hypersonic Western Digital Technologies, Inc. 951 SanDisk Drive, Milpitas, CA 95035 www.sandisk.com Table of Contents Executive

More information

Virtualized SQL Server Performance and Scaling on Dell EMC XC Series Web-Scale Hyper-converged Appliances Powered by Nutanix Software

Virtualized SQL Server Performance and Scaling on Dell EMC XC Series Web-Scale Hyper-converged Appliances Powered by Nutanix Software Virtualized SQL Server Performance and Scaling on Dell EMC XC Series Web-Scale Hyper-converged Appliances Powered by Nutanix Software Dell EMC Engineering January 2017 A Dell EMC Technical White Paper

More information

ServeRAID M5000 Series Performance Accelerator Key for System x Product Guide

ServeRAID M5000 Series Performance Accelerator Key for System x Product Guide ServeRAID M5000 Series Performance Accelerator Key for System x Product Guide The ServeRAID M5000 Series Performance Accelerator Key for System x enables performance enhancements needed by emerging SSD

More information

Certified Solution for Milestone

Certified Solution for Milestone Certified Solution for Milestone Z-series Workstations Table of Contents Executive Summary... 4 Certified Products... 4 HP Z2 Mini Quick Specs... 4 Enabling Intel Quick Synch... 5 Use Cases... 5 Workstation

More information

Increasing Performance of Existing Oracle RAC up to 10X

Increasing Performance of Existing Oracle RAC up to 10X Increasing Performance of Existing Oracle RAC up to 10X Prasad Pammidimukkala www.gridironsystems.com 1 The Problem Data can be both Big and Fast Processing large datasets creates high bandwidth demand

More information

Hitachi Data Systems. Hitachi Dynamic Provisioning with HNAS v12.1 Frequently Asked Questions MK- 92HNAS057-02

Hitachi Data Systems. Hitachi Dynamic Provisioning with HNAS v12.1 Frequently Asked Questions MK- 92HNAS057-02 Hitachi Data Systems Hitachi Dynamic Provisioning with HNAS v12.1 Frequently Asked Questions MK- 92HNAS057-02 2011-2015 Hitachi, Ltd. All rights reserved. No part of this publication may be reproduced

More information

The Benefits of Solid State in Enterprise Storage Systems. David Dale, NetApp

The Benefits of Solid State in Enterprise Storage Systems. David Dale, NetApp The Benefits of Solid State in Enterprise Storage Systems David Dale, NetApp SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA unless otherwise noted. Member companies

More information

Configuring Short RPO with Actifio StreamSnap and Dedup-Async Replication

Configuring Short RPO with Actifio StreamSnap and Dedup-Async Replication CDS and Sky Tech Brief Configuring Short RPO with Actifio StreamSnap and Dedup-Async Replication Actifio recommends using Dedup-Async Replication (DAR) for RPO of 4 hours or more and using StreamSnap for

More information

COMP283-Lecture 3 Applied Database Management

COMP283-Lecture 3 Applied Database Management COMP283-Lecture 3 Applied Database Management Introduction DB Design Continued Disk Sizing Disk Types & Controllers DB Capacity 1 COMP283-Lecture 3 DB Storage: Linear Growth Disk space requirements increases

More information

A Comparative Study of Microsoft Exchange 2010 on Dell PowerEdge R720xd with Exchange 2007 on Dell PowerEdge R510

A Comparative Study of Microsoft Exchange 2010 on Dell PowerEdge R720xd with Exchange 2007 on Dell PowerEdge R510 A Comparative Study of Microsoft Exchange 2010 on Dell PowerEdge R720xd with Exchange 2007 on Dell PowerEdge R510 Incentives for migrating to Exchange 2010 on Dell PowerEdge R720xd Global Solutions Engineering

More information

EMC VMAX 400K SPC-2 Proven Performance. Silverton Consulting, Inc. StorInt Briefing

EMC VMAX 400K SPC-2 Proven Performance. Silverton Consulting, Inc. StorInt Briefing EMC VMAX 400K SPC-2 Proven Performance Silverton Consulting, Inc. StorInt Briefing EMC VMAX 400K SPC-2 PROVEN PERFORMANCE PAGE 2 OF 10 Introduction In this paper, we analyze all- flash EMC VMAX 400K storage

More information

EMC FAST CACHE. A Detailed Review. White Paper

EMC FAST CACHE. A Detailed Review. White Paper White Paper EMC FAST CACHE A Detailed Review Abstract This white paper describes EMC FAST Cache technology in CLARiiON, Celerra unified, and VNX storage systems. It describes the implementation of the

More information

EMC Disk Tiering Technology Review

EMC Disk Tiering Technology Review EMC Disk Tiering Technology Review Tony Negro EMC Corporation Wednesday, February 06, 2013 12:15 PM Session Number 13154 Agenda Basis for FAST Implementation Characteristics Operational Considerations

More information

IBM InfoSphere Streams v4.0 Performance Best Practices

IBM InfoSphere Streams v4.0 Performance Best Practices Henry May IBM InfoSphere Streams v4.0 Performance Best Practices Abstract Streams v4.0 introduces powerful high availability features. Leveraging these requires careful consideration of performance related

More information

IBM Emulex 16Gb Fibre Channel HBA Evaluation

IBM Emulex 16Gb Fibre Channel HBA Evaluation IBM Emulex 16Gb Fibre Channel HBA Evaluation Evaluation report prepared under contract with Emulex Executive Summary The computing industry is experiencing an increasing demand for storage performance

More information

NetApp AFF A300 Gen 6 Fibre Channel

NetApp AFF A300 Gen 6 Fibre Channel White Paper NetApp AFF A300 Gen 6 Fibre Channel Executive Summary Faster time to revenue and increased customer satisfaction are top priorities for today s businesses. Improving business responsiveness

More information

Evaluating Real-Time Hypervisor (RTS) version 4.1 using Dedicated Systems Experts (DSE) test suite

Evaluating Real-Time Hypervisor (RTS) version 4.1 using Dedicated Systems Experts (DSE) test suite http//download.dedicated-systems.com Doc Evaluating Real-Time Hypervisor (RTS) version 4.1 using Dedicated Systems (DSE) test suite Copyright Copyright DS- NV & VUB-EmSlab. All rights reserved, no part

More information

IBM and HP 6-Gbps SAS RAID Controller Performance

IBM and HP 6-Gbps SAS RAID Controller Performance IBM and HP 6-Gbps SAS RAID Controller Performance Evaluation report prepared under contract with IBM Corporation Introduction With increasing demands on storage in popular application servers, the server

More information

Dell Reference Configuration for Large Oracle Database Deployments on Dell EqualLogic Storage

Dell Reference Configuration for Large Oracle Database Deployments on Dell EqualLogic Storage Dell Reference Configuration for Large Oracle Database Deployments on Dell EqualLogic Storage Database Solutions Engineering By Raghunatha M, Ravi Ramappa Dell Product Group October 2009 Executive Summary

More information

Virtuozzo Hyperconverged Platform Uses Intel Optane SSDs to Accelerate Performance for Containers and VMs

Virtuozzo Hyperconverged Platform Uses Intel Optane SSDs to Accelerate Performance for Containers and VMs Solution brief Software-Defined Data Center (SDDC) Hyperconverged Platforms Virtuozzo Hyperconverged Platform Uses Intel Optane SSDs to Accelerate Performance for Containers and VMs Virtuozzo benchmark

More information

The Oracle Database Appliance I/O and Performance Architecture

The Oracle Database Appliance I/O and Performance Architecture Simple Reliable Affordable The Oracle Database Appliance I/O and Performance Architecture Tammy Bednar, Sr. Principal Product Manager, ODA 1 Copyright 2012, Oracle and/or its affiliates. All rights reserved.

More information

An Oracle White Paper September Oracle Utilities Meter Data Management Demonstrates Extreme Performance on Oracle Exadata/Exalogic

An Oracle White Paper September Oracle Utilities Meter Data Management Demonstrates Extreme Performance on Oracle Exadata/Exalogic An Oracle White Paper September 2011 Oracle Utilities Meter Data Management 2.0.1 Demonstrates Extreme Performance on Oracle Exadata/Exalogic Introduction New utilities technologies are bringing with them

More information

Evaluation Report: HP StoreFabric SN1000E 16Gb Fibre Channel HBA

Evaluation Report: HP StoreFabric SN1000E 16Gb Fibre Channel HBA Evaluation Report: HP StoreFabric SN1000E 16Gb Fibre Channel HBA Evaluation report prepared under contract with HP Executive Summary The computing industry is experiencing an increasing demand for storage

More information

Storage Update and Storage Best Practices for Microsoft Server Applications. Dennis Martin President, Demartek January 2009 Copyright 2009 Demartek

Storage Update and Storage Best Practices for Microsoft Server Applications. Dennis Martin President, Demartek January 2009 Copyright 2009 Demartek Storage Update and Storage Best Practices for Microsoft Server Applications Dennis Martin President, Demartek January 2009 Copyright 2009 Demartek Agenda Introduction Storage Technologies Storage Devices

More information

INTEL NEXT GENERATION TECHNOLOGY - POWERING NEW PERFORMANCE LEVELS

INTEL NEXT GENERATION TECHNOLOGY - POWERING NEW PERFORMANCE LEVELS INTEL NEXT GENERATION TECHNOLOGY - POWERING NEW PERFORMANCE LEVELS Russ Fellows Enabling you to make the best technology decisions July 2017 EXECUTIVE OVERVIEW* The new Intel Xeon Scalable platform is

More information

System Performance: Sizing and Tuning

System Performance: Sizing and Tuning www.novell.com/documentation System Performance: Sizing and Tuning ZENworks Mobile Management 2.6.x November 2012 Legal Notices Novell, Inc., makes no representations or warranties with respect to the

More information

USING EMC FAST SUITE WITH SYBASE ASE ON EMC VNX STORAGE SYSTEMS

USING EMC FAST SUITE WITH SYBASE ASE ON EMC VNX STORAGE SYSTEMS White Paper USING EMC FAST SUITE WITH SYBASE ASE ON EMC VNX STORAGE SYSTEMS Applied Technology Abstract This white paper introduces EMC s latest innovative technology, FAST Suite, and emphasizes how users

More information

CONFIGURING ftscalable STORAGE ARRAYS ON OpenVOS SYSTEMS

CONFIGURING ftscalable STORAGE ARRAYS ON OpenVOS SYSTEMS Best Practices CONFIGURING ftscalable STORAGE ARRAYS ON OpenVOS SYSTEMS Best Practices 2 Abstract ftscalable TM Storage G1, G2 and G3 arrays are highly flexible, scalable hardware storage subsystems that

More information

LEVERAGING FLASH MEMORY in ENTERPRISE STORAGE

LEVERAGING FLASH MEMORY in ENTERPRISE STORAGE LEVERAGING FLASH MEMORY in ENTERPRISE STORAGE Luanne Dauber, Pure Storage Author: Matt Kixmoeller, Pure Storage SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA unless

More information

Definition of RAID Levels

Definition of RAID Levels RAID The basic idea of RAID (Redundant Array of Independent Disks) is to combine multiple inexpensive disk drives into an array of disk drives to obtain performance, capacity and reliability that exceeds

More information

IOmark- VDI. IBM IBM FlashSystem V9000 Test Report: VDI a Test Report Date: 5, December

IOmark- VDI. IBM IBM FlashSystem V9000 Test Report: VDI a Test Report Date: 5, December IOmark- VDI IBM IBM FlashSystem V9000 Test Report: VDI- 151205- a Test Report Date: 5, December 2015 Copyright 2010-2015 Evaluator Group, Inc. All rights reserved. IOmark- VDI, IOmark- VM, VDI- IOmark,

More information

PRESERVE DATABASE PERFORMANCE WHEN RUNNING MIXED WORKLOADS

PRESERVE DATABASE PERFORMANCE WHEN RUNNING MIXED WORKLOADS PRESERVE DATABASE PERFORMANCE WHEN RUNNING MIXED WORKLOADS Testing shows that a Pure Storage FlashArray//m storage array used for Microsoft SQL Server 2016 helps eliminate latency and preserve productivity.

More information

DELL EMC UNITY: BEST PRACTICES GUIDE

DELL EMC UNITY: BEST PRACTICES GUIDE DELL EMC UNITY: BEST PRACTICES GUIDE Best Practices for Performance and Availability Unity OE 4.5 ABSTRACT This white paper provides recommended best practice guidelines for installing and configuring

More information

JMR ELECTRONICS INC. WHITE PAPER

JMR ELECTRONICS INC. WHITE PAPER THE NEED FOR SPEED: USING PCI EXPRESS ATTACHED STORAGE FOREWORD The highest performance, expandable, directly attached storage can be achieved at low cost by moving the server or work station s PCI bus

More information

Introducing NVDIMM-X: Designed to be the World s Fastest NAND-Based SSD Architecture and a Platform for the Next Generation of New Media SSDs

Introducing NVDIMM-X: Designed to be the World s Fastest NAND-Based SSD Architecture and a Platform for the Next Generation of New Media SSDs , Inc. Introducing NVDIMM-X: Designed to be the World s Fastest NAND-Based SSD Architecture and a Platform for the Next Generation of New Media SSDs Doug Finke Director of Product Marketing September 2016

More information

IBM System Storage SAN Volume Controller IBM Easy Tier enhancements in release

IBM System Storage SAN Volume Controller IBM Easy Tier enhancements in release IBM System Storage SAN Volume Controller IBM Easy Tier enhancements in 7.5.0 release Kushal S. Patel, Shrikant V. Karve, Sarvesh S. Patel IBM Systems, ISV Enablement July 2015 Copyright IBM Corporation,

More information