White Paper

Cisco UCS C240 M4 I/O Characterization

Executive Summary

This document outlines the I/O performance characteristics of the Cisco UCS C240 M4 Rack Server using the Cisco 12-Gbps SAS modular RAID controller (UCSC-MRAID12G). Performance comparisons of various serial-attached Small Computer System Interface (SCSI; SAS) hard-disk drives (HDDs), solid-state drives (SSDs), Redundant Array of Independent Disks (RAID) configurations, and controller options are presented. This white paper aims to help customers make well-informed decisions when choosing internal disk types and configuring controller options and RAID levels to meet their individual I/O workload needs.

Performance data was obtained using the Iometer measurement tool, with analysis based on the metrics of input/output operations per second (IOPS) for random I/O workloads and megabytes per second (MBps) of throughput for sequential I/O workloads. From this analysis, specific recommendations are made for storage configuration parameters.

Many combinations of different drive types and RAID levels are possible. For these characterization tests, performance evaluations were limited to 7200-rpm small form factor (SFF) HDDs, 10,000-rpm SFF HDDs, and SFF SSD drive types, with configurations of RAID 0, RAID 5, and RAID 10 virtual disks. Tables 1 and 2 list the recommended virtual drive configuration parameters for deployment of SSDs and HDDs.

Table 1. Recommended Virtual Drive Configuration Parameters for Solid-State Drives

Workload        RAID Level   Strip Size   Disk Cache Policy   I/O Cache Policy   Read Policy   Write Policy
Random I/O      RAID 0       256 KB       Disabled            Direct             Normal Read   Write Through
Random I/O      RAID 10      256 KB       Disabled            Direct             Normal Read   Write Through
Random I/O      RAID 5       256 KB       Disabled            Direct             Normal Read   Write Through
Sequential I/O  RAID 0       256 KB       Disabled            Direct             Normal Read   Write Back w/BBU
Sequential I/O  RAID 10      256 KB       Disabled            Direct             Normal Read   Write Back w/BBU
Sequential I/O  RAID 5       256 KB       Disabled            Direct             Normal Read   Write Back w/BBU

Table 2. Recommended Virtual Drive Configuration Parameters for Hard-Disk Drives

Workload        RAID Level   Strip Size   Disk Cache Policy   I/O Cache Policy   Read Policy   Write Policy
Random I/O      RAID 0       256 KB       Disabled            Cached             Read Ahead    Write Back w/BBU
Random I/O      RAID 10      256 KB       Disabled            Cached             Read Ahead    Write Back w/BBU
Random I/O      RAID 5       256 KB       Disabled            Cached             Read Ahead    Write Back w/BBU
Sequential I/O  RAID 0       256 KB       Disabled            Cached             Normal Read   Write Back w/BBU
Sequential I/O  RAID 10      256 KB       Disabled            Cached             Normal Read   Write Back w/BBU
Sequential I/O  RAID 5       256 KB       Disabled            Cached             Normal Read   Write Back w/BBU

Cisco UCS C240 M4 Overview

The Cisco UCS C240 M4 Rack Server is an enterprise-class server in a 2-rack-unit (2RU) form factor. It is designed to deliver exceptional performance, expandability, and efficiency for storage and I/O-intensive infrastructure workloads. These workloads include big data analytics, virtualization, and graphics-intensive and bare-metal applications.

The Cisco UCS C240 M4 server provides:

- Dual Intel Xeon processor E5-2600 v3 CPUs for improved performance suitable for nearly all 2-socket applications
- Next-generation double-data-rate 4 (DDR4) memory and 12-Gbps SAS throughput
- Innovative Cisco UCS virtual interface card (VIC) support in PCI Express (PCIe) or modular LAN-on-motherboard (mLOM) form factor
- Graphics-intensive experiences for more virtual users with support for the latest NVIDIA graphics processing units (GPUs)

The Cisco UCS C240 M4 server also offers exceptional reliability, availability, and serviceability (RAS) features, including:

- Tool-free CPU insertion
- Easy-to-use latching lid
- Hot-swappable and hot-pluggable components
- Redundant Cisco Flexible Flash Secure Digital (SD) cards

The Cisco UCS C240 M4 server can be deployed as a standalone device or as part of a managed Cisco Unified Computing System (Cisco UCS) environment. Cisco UCS unifies computing, networking, management, virtualization, and storage access into a single integrated architecture that can enable end-to-end server visibility, management, and control in both bare-metal and virtualized environments. With a Cisco UCS managed deployment, the Cisco UCS C240 M4 takes advantage of standards-based unified computing innovations to significantly reduce customers' total cost of ownership (TCO) and increase business agility.

Specifications at a Glance

Table 3. Server Specifications

Item          Specification
Chassis       Two-rack-unit (2RU) server
Processors    Either 1 or 2 Intel Xeon processor E5-2600 v3 product family CPUs
Chipset       Intel C610 series
Memory        Up to 24 DDR4 dual in-line memory modules (DIMMs) with speeds of up to 2133 MHz
PCIe slots    Up to 6 PCI Express (PCIe) Generation 3 slots (4 full-height and full-length, 4 network connectivity status indicator (NCSI) capable and VIC ready, and 2 GPU ready)
Hard drives   Up to 24 small form factor (SFF) drives or 12 large form factor (LFF) drives, plus 2 optional internal SATA boot drives
Embedded NIC  Two 1-Gbps Intel i350-based Gigabit Ethernet ports
mLOM          Slot can flexibly accommodate 1-Gbps, 10-Gbps, or 40-Gbps adapters

Table 3. Server Specifications (continued)

Item             Specification
RAID controller  Cisco 12-Gbps SAS modular RAID controller (UCSC-MRAID12G) for internal drives; Cisco 9300-8E 12-Gbps SAS HBA for external drives; embedded software RAID (entry RAID solution) for up to 4 SATA drives

Cisco UCS C240 M4 Models

The Cisco UCS C240 M4 server can be configured in four different models to match specific customer environments.

Figure 1. Cisco UCS C240 M4SX

The Cisco UCS C240 M4SX provides outstanding storage expandability and performance with up to 24 x 2.5-inch 12-Gbps SFF HDDs or SSDs plus two optional internal 2.5-inch boot drives (Figure 1).

Figure 2. Cisco UCS C240 M4L

The Cisco UCS C240 M4L is the choice for outstanding storage capability with up to 12 x 3.5-inch 12-Gbps LFF HDDs plus two optional internal 2.5-inch boot drives (Figure 2).

Figure 3. Cisco UCS C240 M4S2

The Cisco UCS C240 M4S2 balances cost and expandability with up to 16 x 2.5-inch 12-Gbps SFF HDDs or SSDs (Figure 3).

Figure 4. Cisco UCS C240 M4S

The Cisco UCS C240 M4S is the cost-effective choice with up to 8 x 2.5-inch 12-Gbps SFF HDDs or SSDs (Figure 4).

For details about how to configure specific models, please refer to the appropriate specification sheets:

- Small form factor models (either HDD or SSD): http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/c240m4-sff-spec-sheet.pdf
- Large form factor HDD model: http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/c240m4-lff-spec-sheet.pdf

Cisco 12-Gbps SAS Modular RAID Controller

The Cisco UCS C240 M4 Rack Server is equipped with the Cisco 12-Gbps SAS modular RAID controller for internal drives. The controller can be ordered with modular flash-based write cache (FBWC) options of 1 GB, 2 GB, or 4 GB.

Virtual Disk Options

The following controller options can be configured with virtual disks to accelerate write and read performance and provide data integrity; each is described in the sections that follow, and a configuration sketch appears after this list:

- RAID level
- Strip size
- Access policy
- Disk cache policy
- I/O cache policy
- Read policy
- Write policy
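As a concrete illustration, the commands below sketch how the recommended profiles from Tables 1 and 2 might be applied when creating virtual drives, assuming the controller is managed with the storcli utility used for MegaRAID-based controllers such as this one. The controller index (/c0) and the enclosure:slot range (252:1-8) are hypothetical placeholders for an actual drive topology; in practice, the same parameters can also be set through Cisco IMC or Cisco UCS Manager.

    # Random I/O profile for SSDs (Table 1): RAID 0, 256-KB strip, disk cache
    # disabled, direct I/O, no read ahead, write through
    storcli /c0 add vd type=raid0 drives=252:1-8 strip=256 pdcache=off direct nora wt

    # Sequential I/O profile for HDDs (Table 2): RAID 5, 256-KB strip, disk cache
    # disabled, cached I/O, no read ahead, write back (protected by BBU)
    storcli /c0 add vd type=raid5 drives=252:1-8 strip=256 pdcache=off cached nora wb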

RAID Level

Table 4 describes the supported RAID levels and their characteristics.

Table 4. RAID Levels and Characteristics

RAID Level  Characteristics                                                                       Parity  Redundancy
RAID 0      Striping of 2 or more disks to achieve optimal performance                            No      No
RAID 1      Data mirroring on 2 disks for redundancy with slight performance improvement          No      Yes
RAID 5      Data striping with distributed parity for improved performance and fault tolerance    Yes     Yes
RAID 6      Data striping with dual parity for dual fault tolerance                               Yes     Yes
RAID 10     Data mirroring and striping for redundancy and performance improvement                No      Yes
RAID 50     Block striping with distributed parity for high fault tolerance                       Yes     Yes
RAID 60     Block striping with dual parity for high fault tolerance and performance improvement  Yes     Yes

For these performance characterization tests, RAID 0, RAID 5, and RAID 10 were used.

To avoid poor write performance, full initialization is always recommended when creating a RAID 5 or RAID 6 virtual disk. Depending on the virtual disk size, the full initialization process can take a long time. In this mode, the controller is fully utilized to perform the initialization and blocks any I/O operations. Fast initialization is not recommended for RAID 5 and RAID 6 virtual disks.

Strip Size

Strip size specifies the length of the data segments that the controller writes across multiple drives, not including parity drives. Strip size can be configured as 64 KB, 128 KB, 256 KB, 512 KB, or 1 MB.

Strip Size Versus Stripe Size

A virtual disk consists of two or more physical drives that are configured together through a RAID controller to appear as a single logical drive. To improve overall performance, RAID controllers break data up into discrete chunks called strips that are distributed one after another across the physical drives in a virtual disk. A stripe is the collection of one set of strips across the physical drives in a virtual disk. Stripe size is not configured directly: it is a product of the strip size, the number of physical drives in the virtual disk, and the RAID level. For example, a RAID 0 virtual disk of 8 drives with a 256-KB strip size has a 2-MB stripe size.

Access Policy

- RW: Read and write access is permitted.
- Read Only: Read access is permitted, but write access is denied.
- Blocked: No access is permitted.

Disk Cache Policy

- Disabled: Disk cache is disabled. The drive sends a data transfer completion signal to the controller when the disk media has actually received all the data in a transaction. This process helps ensure data integrity in the event of a power failure.
- Enabled: Disk cache is enabled. The drive sends a data transfer completion signal to the controller when the drive cache has received all the data in a transaction. However, the data has not actually been transferred to the disk media, so data may be permanently lost in the event of a power failure. Although disk caching can accelerate I/O performance, it is not recommended for enterprise deployments.

I/O Cache Policy

- Direct: Data transfers from read and write operations are not buffered in cache memory.
- Cached: Data transfers from read and write operations are buffered in cache memory, and subsequent read requests for the same data can be satisfied from the cache. Note that Cached I/O refers to the caching of read data, and Read Ahead refers to the caching of speculative future read data.

Read Policy

- No Read Ahead (Normal Read): Only the requested data is read, and the controller does not read ahead any data.
- Always Read Ahead: The controller reads sequentially ahead of requested data and stores the additional data in cache memory, anticipating that the data will be needed soon.

Write Policy

- Write Through: Data is written directly to the disks. The controller sends a data transfer completion signal to the host when the drive subsystem has received all the data in a transaction.
- Write Back: Data is first written to the controller cache memory and is flushed to the disks when the commit operation occurs at the controller cache. The controller sends a data transfer completion signal to the host when the controller cache has received all the data in a transaction.
- Write Back with Battery Backup: Battery backup is used to provide data integrity protection in the event of a power failure. Battery backup is always recommended for enterprise deployments.

HDD Versus SSD

The choice between hard-disk drives (HDDs) and solid-state drives (SSDs) is critical for enterprise customers and involves considerations of performance, reliability, price, and capacity. Part of the challenge is the sheer size of today's data. The huge growth in data is threatening traditional computing infrastructures based on HDDs. However, the problem isn't simply growth; it is also the speed at which applications operate. Processor and networking speeds have kept pace with application demands, but storage subsystems have not, as illustrated in Figure 5.

Figure 5. HDD Speed Improvements Have Lagged Dramatically Behind Those of Other Technologies

The mechanical nature of HDDs in high I/O environments is to blame. Deployment of very fast SSDs is the increasingly popular solution to this problem.

Performance

Without question, SSDs are faster than HDDs. HDDs have an unavoidable overhead because they physically scan the disk for read and write operations. In an HDD array, I/O read and write requests are directed to physical disk locations. In response, the platter spins and the disk drive heads seek the location to write or read the I/O request. Latency from noncontiguous write locations multiplies the seek-time problem. SSDs have no physical tracks or sectors and no mechanical movement. Thus, SSDs can reach memory addresses much more quickly than HDD heads can physically move. Because SSDs have no moving parts, there is no mechanical seek time or latency to overcome. Even the fastest 15,000-rpm HDDs may not keep pace with SSDs in a high-demand I/O environment. Parallel disks, caching, and additional memory certainly help, but the inherent physical disadvantages have limited the ability of HDDs to keep pace with today's seemingly limitless data growth.

Reliability

Not all SSDs are designed the same. Cisco uses several different technologies and design requirements to help ensure that its SSDs can meet the reliability and endurance demands of server storage. Reliability depends on many factors, including use, physical environment, application I/O demands, vendor, and mean time between failures (MTBF). In challenging environments, the physical reliability of SSDs is clearly better than that of HDDs given SSDs' lack of mechanical parts. SSDs can survive cold and heat, drops, and extreme G-forces. However, these extreme conditions are not a factor in typical data centers.

Although SSDs have no moving heads or spindles, they have their own unique stress points and failures in components such as transistors and capacitors. As an SSD ages, its performance slows. The processor must read, modify, erase, and write increasing amounts of data. Eventually, memory cells wear out. Some common SSD points of failure include:

- Bit errors: Random data bits may be stored in cells.
- Flying or shorn writes: Correct writes may be written in the wrong location, or write operations may be truncated due to power loss.
- Unserializability: Writes may be recorded in the wrong order.
- Firmware: Firmware may fail, become corrupted, or upgrade improperly.
- Electronic failures: Even though SSDs have no moving parts, physical components such as chips and transistors may fail.
- Power outages: SSDs are subject to damaged file integrity if they are reading or writing during power interruptions.

As SSD technology evolves, manufacturers are improving their reliability processes. With maturity, reliability will become a smaller differentiating factor in the choice between SSDs and HDDs.

Enterprise Performance SSDs Versus Enterprise Value SSDs

To meet the requirements of different application environments, Cisco offers both Enterprise Performance SSDs and Enterprise Value SSDs. Both deliver superior performance compared to HDDs; however, Enterprise Performance SSDs support higher read/write workload levels and have a longer expected service life. Enterprise Value SSDs provide relatively large storage capacities at lower cost, but they do not have the endurance of the Enterprise Performance SSDs.

Enterprise Performance SSDs provide high endurance and support up to 10 full drive writes per day. These SSDs are targeted at write-centric I/O applications such as caching, online transaction processing (OLTP), data warehousing, and virtual desktop infrastructure (VDI).

Enterprise Value SSDs provide low endurance and support up to 1 full drive write per day. These SSDs are targeted at read-centric I/O applications such as OS boot, streaming media, and collaboration.

Price

When considering price, it is important to differentiate between Enterprise Performance SSDs and Enterprise Value SSDs and to recognize the significant differences between the two in performance, cost, reliability, and targeted applications. Although it can be appealing to integrate SSDs with NAND flash technology into an enterprise storage solution to improve performance, the cost of doing so on a large scale may be prohibitive. When price is measured on a per-gigabyte basis ($/GB), SSDs are significantly more expensive than HDDs. Even when price is measured per unit of bandwidth ($/GBps), Enterprise Performance SSDs remain more expensive. In addition to the price of individual drives, total cost of ownership (TCO) should be considered. The higher performance of SSDs may allow I/O demands to be met with fewer SSDs than HDDs, providing a TCO advantage.

Capacity

SSDs are available in either the 3.5-inch large form factor or the 2.5-inch small form factor. HDDs are available in the 3.5-inch large form factor with 7200-rpm speed, and in the 2.5-inch small form factor with 7200-rpm, 10,000-rpm, and 15,000-rpm speeds. SSDs provide the fastest performance. Large form factor HDDs offer the highest overall storage capacity, and small form factor HDDs provide more options. The choice of drive dramatically affects the overall storage capacity of the server. Table 5 lists the highest-density drives for each drive type that can be configured with the Cisco UCS C240 M4 Rack Server at the time of this writing.

Table 5. Drive Type and Density

Drive Type  Form Factor   Speed        Highest-Density Drive  Max Front Drive Slots  Max Raw Capacity
SSD         2.5-inch SFF  -            1.6 TB                 24                     38.4 TB
SSD         3.5-inch LFF  -            1.6 TB                 12                     19.2 TB
HDD         2.5-inch SFF  15,000 rpm   600 GB                 24                     14.4 TB
HDD         2.5-inch SFF  10,000 rpm   1.2 TB                 24                     28.8 TB
HDD         2.5-inch SFF  7200 rpm     1 TB                   24                     24 TB
HDD         3.5-inch LFF  7200 rpm     4 TB                   12                     48 TB
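The raw capacities in Table 5 follow directly from drive density multiplied by slot count. A minimal sketch (Python) reproducing that arithmetic:

    # Max raw capacity = highest-density drive (TB) x max front drive slots
    drives = {
        "SSD 2.5-inch SFF":            (1.6, 24),
        "SSD 3.5-inch LFF":            (1.6, 12),
        "HDD 2.5-inch SFF 15,000 rpm": (0.6, 24),
        "HDD 2.5-inch SFF 10,000 rpm": (1.2, 24),
        "HDD 2.5-inch SFF 7200 rpm":   (1.0, 24),
        "HDD 3.5-inch LFF 7200 rpm":   (4.0, 12),
    }
    for name, (tb_per_drive, slots) in drives.items():
        # e.g., 24 x 1.6 TB = 38.4 TB for the SFF SSD configuration
        print(f"{name}: {tb_per_drive * slots:.1f} TB raw")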

HDD Versus SSD: Summary

Customers should consider both performance and price when choosing between SSDs and HDDs. SSDs offer significant benefits for some workloads. Customer applications with the most random data requirements will see the greatest benefit from SSDs over HDDs. Even for sequential workloads, SSDs can offer increased I/O performance over HDDs; however, the performance improvement may not justify their additional cost for sequential operations. Therefore, Cisco recommends HDDs for predominantly sequential I/O applications.

For typical random workloads, SSDs offer tremendous performance improvements, with diminishing concern about reliability, write endurance, and wear-out. Performance improves further as applications become more parallel and use the full capabilities of SSDs with tiering software or caching applications. The performance improvements gained from SSDs can provide a strong justification for their additional cost for random operations. Therefore, Cisco recommends SSDs for random I/O applications.

Iometer Overview

Iometer is a workload generator and measurement tool for I/O subsystems of single and clustered servers. As a workload generator, Iometer performs controlled I/O operations to stress the system. As a measurement tool, Iometer examines and records the performance of its I/O operations and their impact on the system. Iometer is useful for both benchmarking and troubleshooting.

Different workloads can be configured through an access specification file to replicate a variety of I/O behaviors. Configurable parameters include:

- Transfer request size
- Percentage distribution of random and sequential operations
- Percentage distribution of read/write operations
- Aligned I/O
- Reply size
- TCP/IP status
- Burstiness
- Number of workers
- Outstanding I/O

Iometer collects a wide variety of measurement data, including:

- IOPS (total, read, and write)
- MiBps: binary (total, read, and write)
- MBps: decimal (total, read, and write)
- Transactions per second
- Connections per second
- Average response time (total, read, and write)
- Average transaction time
- Average connection time
- Maximum response time (total, read, and write)

- Maximum transaction time
- Maximum connection time
- Errors (total, read, and write)
- Bytes read
- Bytes written
- Read I/O operations
- Write I/O operations
- Connections
- Transactions per connection
- Total raw response time (read and write)
- Total raw transaction time
- Total raw connection time
- Maximum raw response time (read and write)
- Maximum raw transaction time
- Maximum raw connection time
- Total raw run time
- Starting sector
- Maximum size
- Queue depth
- Percentage of CPU utilization
- Percentage of user time
- Percentage of privileged time
- Percentage of DPC time
- Percentage of interrupt time
- Processor speed
- Interrupts per second
- CPU effectiveness
- Packets per second
- Packet errors
- Segments retransmitted per second

The relevant metrics used for this performance characterization are:

- IOPS: I/O operations per second is a common performance metric used to measure computer storage devices, including HDDs and SSDs. This metric is used to evaluate performance for random I/O workloads.
- MBps: Megabytes per second measures throughput, or the amount of data transferred to a computer storage device. MBps should not be confused with the abbreviation Mbps, which refers to megabits per second. This metric is used to evaluate performance for sequential I/O workloads.

More information is available at http://www.iometer.org.
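The two metrics are related through the transfer request size: throughput is simply the I/O rate multiplied by the block size. A small sketch (Python) of the conversion used when reading Iometer results; the sample numbers are illustrative, not measured values from these tests:

    def mbps_from_iops(iops: float, block_size_bytes: int) -> float:
        """Decimal megabytes per second from an IOPS rate at a fixed block size."""
        return iops * block_size_bytes / 1_000_000

    print(mbps_from_iops(10_000, 4 * 1024))    # 4-KB random blocks  -> ~41 MBps
    print(mbps_from_iops(10_000, 256 * 1024))  # 256-KB large blocks -> ~2621 MBps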

SSD Performance Results

Figures 6 through 15 were prepared from Iometer measurement data. They illustrate the I/O performance of the 400-GB 6-Gbps SAS Enterprise Performance (high-endurance) SSDs used in these comparison tests.

Figure 6. SSD Performance for Random I/O (50% Read and 50% Write)

Figure 7. SSD Performance for Random I/O (70% Read and 30% Write)

Figure 8. SSD Performance for Sequential Read I/O (100% Read and 0% Write)

Figure 9. SSD Performance for Sequential Write I/O (0% Read and 100% Write)

Figures 10 and 11 show that strip size has little effect on the random I/O performance of SSDs. These comparisons are based on measurements using a 4-KB block size.

Figure 10. SSD Performance Comparing Strip Size for Random I/O (50% Read and 50% Write)

Figure 11. SSD Performance Comparing Strip Size for Random I/O (70% Read and 30% Write)

Figure 12. SSD RAID 0 Performance Comparing 8, 16, and 24 Drives for Random I/O (50% Read and 50% Write)

Figure 13. SSD RAID 0 Performance Comparing 8, 16, and 24 Drives for Random I/O (70% Read and 30% Write)

Figure 14. SSD RAID 0 Performance Comparing 8, 16, and 24 Drives for Sequential Read I/O (100% Read and 0% Write)

Figure 15. SSD RAID 0 Performance Comparing 8, 16, and 24 Drives for Sequential Write I/O (0% Read and 100% Write)

10,000-rpm HDD Performance Results

Figures 16 through 19 were prepared from Iometer measurement data. They illustrate the I/O performance of the 1.2-TB 6-Gbps SAS 10,000-rpm SFF HDDs used in these comparison tests.

Figure 16. 10,000-rpm HDD Performance for Random I/O (50% Read and 50% Write)

Figure 17. 10,000-rpm HDD Performance for Random I/O (70% Read and 30% Write)

Figure 18. 10,000-rpm HDD Performance for Sequential Read I/O (100% Read and 0% Write)

Figure 19. 10,000-rpm HDD Performance for Sequential Write I/O (0% Read and 100% Write)

7200-rpm HDD Performance Results

Figures 20 through 23 were prepared from Iometer measurement data. They illustrate the I/O performance of the 1-TB 6-Gbps SAS 7200-rpm SFF HDDs used in these comparison tests.

Figure 20. 7200-rpm HDD Performance for Random I/O (50% Read and 50% Write)

Figure 21. 7200-rpm HDD Performance for Random I/O (70% Read and 30% Write)

Figure 22. 7200-rpm HDD Performance for Sequential Read I/O (100% Read and 0% Write)

Figure 23. 7200-rpm HDD Performance for Sequential Write I/O (0% Read and 100% Write)

For More Information

http://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-c240-m4-rack-server/index.html

Appendix A. Test Environment

Tables 6 through 10 detail the hardware and software configuration used for these I/O performance measurement tests, including server properties, BIOS properties, virtual drive policies, and Iometer settings.

Table 6. Server Properties

Name                Value
Product Name        Cisco UCS C240 M4SX
CPUs                (2) Intel Xeon processor E5-2699 v3 at 2.30 GHz
Number of Cores     36
Number of Threads   72
Total Memory        256 GB
Memory DIMMs        (16) 16 GB at 2 DIMMs per channel (DPC)
Memory Speed        2133 MHz
Network Controller  Intel i350 1-Gbps network controller
VIC Adapter         Cisco UCS VIC 1225 10-Gbps 2-port converged network adapter (CNA), Enhanced Small Form-Factor Pluggable (SFP+)
mLOM                Intel i350 mLOM 1-Gbps network controller
RAID Controller     Cisco 12-Gbps SAS modular RAID controller
SSDs                (24) 400-GB 6-Gbps SAS Enterprise Performance SSD (Toshiba PX02SMF040)
10,000-rpm HDDs     (24) 1.2-TB 6-Gbps SAS 10,000-rpm SFF HDD (Seagate ST1200MM0007)
7200-rpm HDDs       (24) 1-TB 6-Gbps SAS 7200-rpm SFF HDD (Seagate ST91000640SS)

Table 7. BIOS Properties

Name                                 Value
BIOS Version                         C240M4.2.0.3d.0.111120141511
Intel Hyper-Threading Technology     Enabled
Number of Cores                      All
Execute Disable Bit                  Enabled
Intel VT                             Disabled
Intel VT-d                           Disabled
Intel VT-d Coherency Support         Disabled
Intel VT-d ATS Support               Disabled
CPU Performance                      Enterprise
Hardware Prefetcher                  Enabled
Adjacent Cache Line Prefetcher       Enabled
DCU Streamer Prefetch                Enabled
DCU IP Prefetcher                    Enabled
Direct Cache Access Support          Enabled
Power Technology                     Custom
Enhanced Intel Speedstep Technology  Enabled
Intel Turbo Boost Technology         Enabled
Processor Power State C6             Disabled
Processor Power State C1 Enhanced    Disabled
P-STATE Coordination                 HW ALL
Energy Performance                   Balanced Performance

Table 7. BIOS Properties (continued)

Name                  Value
Select Memory RAS     Maximum Performance
NUMA                  Enabled
Channel Interleaving  Auto
Rank Interleaving     Auto
Patrol Scrub          Enabled
Demand Scrub          Enabled
Altitude              300 M

Table 8. SSD Virtual Drive Policies

Workload        RAID Level   Strip Size   Disk Cache Policy   I/O Cache Policy   Read Policy   Write Policy
Random I/O      RAID 0       256 KB       Disabled            Direct             Normal Read   Write Through
Random I/O      RAID 10      256 KB       Disabled            Direct             Normal Read   Write Through
Random I/O      RAID 5       256 KB       Disabled            Direct             Normal Read   Write Through
Sequential I/O  RAID 0       256 KB       Disabled            Direct             Normal Read   Write Back w/BBU
Sequential I/O  RAID 10      256 KB       Disabled            Direct             Normal Read   Write Back w/BBU
Sequential I/O  RAID 5       256 KB       Disabled            Direct             Normal Read   Write Back w/BBU

Table 9. HDD Virtual Drive Policies

Workload        RAID Level   Strip Size   Disk Cache Policy   I/O Cache Policy   Read Policy   Write Policy
Random I/O      RAID 0       256 KB       Disabled            Cached             Read Ahead    Write Back w/BBU
Random I/O      RAID 10      256 KB       Disabled            Cached             Read Ahead    Write Back w/BBU
Random I/O      RAID 5       256 KB       Disabled            Cached             Read Ahead    Write Back w/BBU
Sequential I/O  RAID 0       256 KB       Disabled            Cached             Normal Read   Write Back w/BBU
Sequential I/O  RAID 10      256 KB       Disabled            Cached             Normal Read   Write Back w/BBU
Sequential I/O  RAID 5       256 KB       Disabled            Cached             Normal Read   Write Back w/BBU

Table 10. Iometer Settings

Name                                   Value
Iometer Version                        1.1.0
Run Time                               10 minutes (per access specification)
Ramp-Up Time                           0 seconds
Record Results                         All
Number of Workers                      72 (equal to number of processor threads)
Number of Outstanding I/Os per Target  16
Write I/O Data Pattern                 Repeating bytes
Transfer Delay                         0 ms
Burst Length                           1 I/O
Align I/Os                             On request size boundaries
Reply Size                             No reply
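Note that the aggregate load these settings generate is the product of the worker count and the per-target outstanding I/O count. A one-line sketch (Python) of that arithmetic, assuming one target per worker:

    workers, outstanding_per_target = 72, 16
    print(workers * outstanding_per_target)  # 1152 outstanding I/Os in flight system-wide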

Printed in USA  C11-734798-00  06/15

© 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public.