Supermicro All-Flash NVMe Solution for Ceph Storage Cluster

White Paper - January 2019
High Performance Ceph Reference Architecture Based on Ultra SuperServer and Micron 9200 MAX NVMe SSDs
Super Micro Computer, Inc. | 980 Rock Avenue, San Jose, CA 95131, USA | www.supermicro.com

Table of Contents
  Powering Ceph Storage Cluster with Supermicro All-Flash NVMe Storage Solutions
  Supermicro Ceph OSD Ready All-Flash NVMe Reference Architecture
    Planning Consideration
    Supermicro NVMe Reference Architecture Design
    Configuration Consideration
    Ultra All-Flash NVMe OSD Ready System
    Micron 9200 MAX NVMe SSDs
    Ceph Storage Cluster - Software
  Performance Testing
    Establishing Baseline Performance
    Testing Methodology
  4K Random Workload FIO Test Results
    4K Random Write Workload Analysis
    4K Random Read Workload Analysis
    4K Random 70Read/30Write Workload Analysis
  4M Object Workloads Test Result
    4MB Object Write Workload Analysis
    4MB Object Read Workload Analysis
  Summary

Powering Ceph Storage Cluster with Supermicro All-Flash NVMe Storage Solutions

NVMe, an interface specification for accessing non-volatile storage media over the PCI Express (PCI-E) bus, delivers up to 70% lower latency and up to six times the throughput/IOPS of standard SATA drives. Supermicro offers a wide range of products supporting NVMe SSDs, from 1U rackmount and 4U SuperStorage servers to 7U 8-socket mission-critical servers and 8U high-density SuperBlade solutions.

In this white paper, we investigate the performance characteristics of a Ceph cluster provisioned on all-flash NVMe Ceph storage nodes, based on configuration and performance analysis done by Micron Technology, Inc. Results are published with permission and are based on testing with the Supermicro SuperServer 1029U-TN10RT; other models and configurations may produce different results. This type of deployment is optimized for I/O-intensive applications requiring the highest performance for either block or object storage. For other deployment scenarios and optimization goals, refer to the table below:

Goals | Recommendation | Optimizations
Cost Effectiveness + High Storage Capacity & Density | Supermicro 45/60/90-bay SuperStorage Systems | High-capacity SATA HDDs/SSDs for maximum storage density
Accelerated Capacity & Density | 45/60/90-bay SuperStorage Systems with NVMe Journal | Using a few NVMe SSDs as the Ceph journal greatly improves cluster responsiveness while retaining the capacity benefits
High Performance / IOPS Intensive | All-Flash NVMe Server (1U Ultra 10 NVMe, 1U Ultra 20 NVMe, 2U SuperStorage 48 NVMe) | Achieving the highest performance with all NVMe SSDs

The table below summarizes the workload types for which Ceph Block Storage and Ceph Object Storage are each best suited; a short usage sketch follows the table.

Ceph Block Storage
  Workloads:
    1. Storage for running VM disk volumes
    2. Elastic Block Storage deployed with an on-premise cloud
    3. Primary storage for MySQL and other SQL database applications
    4. Storage for Skype, SharePoint and other business collaboration applications
    5. Storage for IT management applications
    6. Dev/test systems
  Characteristics: higher I/O; random R/W; high rate of content change

Ceph Object Storage
  Workloads:
    1. Image/video/audio repository services
    2. VM disk volume snapshots
    3. Backup and archive
    4. ISO image store and repository service
    5. Amazon S3-like object storage services deployed with an on-premise cloud
    6. Dropbox-like services within the enterprise
  Characteristics: low-cost, scale-out storage; sequential, larger R/W; lower IOPS; fully API accessible

[Pictured: 4U SuperStorage 45-bay + 6 NVMe; 7U 8-socket with 16 U.2 NVMe + AICs; 8U SuperBlade with 10x 4-socket blades, up to 14 NVMe per blade]
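To make the block/object distinction concrete, the commands below sketch how a client might consume each interface on a Luminous-era cluster. The pool, image, bucket and file names are illustrative assumptions, and the object example presumes a RADOS Gateway endpoint with S3 credentials already configured for s3cmd.

    # Block storage: create an RBD image to back a VM disk volume
    rbd create vms/vm01-disk --size 100G

    # Object storage: store a backup archive via the RADOS Gateway S3 API
    s3cmd put backup.tar.gz s3://backups/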

Supermicro Ceph OSD Ready All-Flash NVMe Reference Architecture

Planning Consideration

Number of Ceph Storage Nodes: At least three storage nodes must be present in a Ceph cluster to be eligible for Red Hat technical support. Ten storage nodes is the recommended scale for an enterprise Ceph cluster, and four storage nodes form a valid building block for scaling up to larger deployments. This RA uses four storage nodes.

Number of Ceph Monitor Nodes: At least three monitor nodes should be configured, on hardware separate from the storage nodes. Monitor nodes do not require high-performance CPUs, but they benefit from SSDs for storing the monitor map data. One monitor node is the technical minimum; three or more are typically used in production deployments.

Replication Factor: NVMe SSDs are highly reliable, with a high MTTF and a low bit error rate. 2x replication is therefore recommended in production when deploying OSDs on NVMe, versus the 3x replication common with legacy storage (see the pool-creation sketch at the end of this section).

Supermicro NVMe Reference Architecture Design

  Data Switches: 2x Supermicro SSE-C3632SR, 32x 100GbE ports
  Monitor Nodes: 3x Supermicro SYS-5019P series
  Storage Nodes: 4x Supermicro SYS-1029U-TN10RT

Configuration Consideration

CPU Sizing: Ceph OSD processes can consume large amounts of CPU while doing small block operations. Consequently, a higher CPU core count generally results in higher performance for I/O-intensive workloads. For throughput-intensive workloads characterized by large sequential I/O, Ceph performance is more likely to be bound by the maximum network bandwidth of the cluster.

Ceph Configuration Tuning: Tuning Ceph for NVMe devices can be complex. The ceph.conf settings used in this RA are optimized for small-block random performance.

Networking: A 25GbE network is required to realize the maximum block performance of an NVMe-based Ceph cluster. For throughput-intensive workloads, 50GbE to 100GbE is recommended.
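As a minimal sketch of the 2x replication recommendation above, a replicated RBD pool on a Luminous cluster could be created as follows. The pool name and placement-group counts are assumptions for illustration, not values taken from this RA.

    # Create a replicated pool and set its replica count to 2
    ceph osd pool create rbd-nvme 4096 4096 replicated
    ceph osd pool set rbd-nvme size 2
    # Tag the pool for RBD use (expected by Luminous health checks)
    ceph osd pool application enable rbd-nvme rbd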

Ultra All-Flash NVMe OSD Ready System

Supermicro's Ultra SuperServer family includes 10- and 20-drive all-flash NVMe systems in a 1U form factor and 24- and 48-drive all-flash NVMe systems in a 2U form factor, architected to offer unrivaled performance, flexibility, scalability and serviceability, making them ideally suited to demanding enterprise data processing workloads.

[Pictured: 1U 10 NVMe; 1U 20 NVMe; 2U 24 NVMe; 2U 48 NVMe]

For this reference architecture, we used four of the highly flexible 10-drive systems as the Ceph storage nodes, with the hardware configuration shown in Table 1; a quick way to verify this configuration on a node is sketched after this section.

Table 1. Storage Node Hardware Configuration

Component | Configuration
Server Model | Supermicro Ultra SuperServer 1029U-TN10RT
Processor | 2x Intel Xeon Platinum 8168; 24 cores / 48 threads each, up to 3.7GHz
Memory | 12x Micron 32GB DDR4-2666 DIMMs, 384 GB total per node
NVMe Storage | 10x Micron 9200 MAX 6.4TB NVMe SSDs
SATA Storage | Micron 5100 SATA SSD
Networking | 2x Mellanox ConnectX-5 100GbE dual-port
Power Supplies | Redundant 1000W Titanium Level digital power supplies

Platform highlights (1029U-TN10RT): dual Intel Xeon Scalable processors up to 205W; 24 DDR4 DIMM slots; 10 hot-swap NVMe/SATA3 drive bays; 2 PCI-E 3.0 x16 slots; 2 SATA DOM ports; 8 system fans; redundant power supplies.
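The following commands are an illustrative way to confirm that a storage node matches the configuration in Table 1; they assume the nvme-cli package is installed and are not part of the RA itself.

    # List NVMe devices: each node should show 10x Micron 9200 MAX 6.4TB SSDs
    nvme list
    # Confirm CPU model and installed memory (2x Xeon Platinum 8168, 384 GB)
    lscpu | grep "Model name"
    free -h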

Micron 9200 MAX NVMe SSDs

The Micron 9200 series is Micron's flagship performance NVMe SSD family, built on its second-generation NVMe SSD controller. The 9200 family offers the right capacity for demanding workloads, from 1.6TB to 11TB, in write-intensive, mixed-use and read-intensive designs.

Table 2. Micron 9200 MAX NVMe Specifications

Model | Micron 9200 MAX
Interface | PCI-E 3.0 x4
Form Factor | U.2 SFF
Capacity | 6.4 TB
Sequential Read | 3.35 GB/s
Sequential Write | 2.4 GB/s
Random Read | 800,000 IOPS
Random Write | 260,000 IOPS
Endurance | 35.1 PB
MTTF | 2M device hours

Note: GB/s measured using 128K transfers; IOPS measured using 4K transfers. All data is steady state. Complete MTTF details are available in Micron's product datasheet.

Ceph Storage Cluster - Software

Red Hat Ceph Storage 3.0: Red Hat collaborates with the global open source Ceph community to develop new Ceph features, then packages the changes into predictable, stable, enterprise-quality releases. Red Hat Ceph Storage 3.0 is based on the Ceph community Luminous release (version 12.2.1), to which Red Hat was a leading code contributor. As a self-healing, self-managing, unified storage platform with no single point of failure, Red Hat Ceph Storage decouples software from hardware to support block, object, and file storage on standard servers, HDDs and SSDs, significantly lowering the cost of storing enterprise data. Red Hat Ceph Storage is tightly integrated with OpenStack services, including Nova, Cinder, Manila, Glance, Keystone, and Swift, and it offers user-driven storage lifecycle management.

The software stack is listed in Tables 3 and 4; a simple verification sketch follows.

Table 3. Ceph Storage and Monitor Nodes: Software

Operating System | Red Hat Enterprise Linux 7.4
Storage | Red Hat Ceph Storage 3.0 (Luminous 12.2.1)
NIC Driver | Mellanox OFED Driver 4.1.1

Table 4. Ceph Load Generation Nodes: Software

Operating System | Red Hat Enterprise Linux 7.4
Storage | Red Hat Ceph Storage 3.0 (Luminous 12.2.1)
Benchmark | FIO 3.0 with librbd enabled
NIC Driver | Mellanox OFED Driver 4.1.1
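A quick, hedged way to check that a node's software stack matches Tables 3 and 4 (command availability depends on the installed packages):

    ceph --version     # expect a 12.2.x (Luminous) build matching RHCS 3.0
    ceph -s            # overall cluster health, monitor and OSD counts
    fio --version      # FIO 3.0 on the load generation nodes
    ofed_info -s       # Mellanox OFED driver release (4.1.1 in this RA)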

Performance Testing

Establishing Baseline Performance

Local NVMe performance was first measured on each node, for both 4KB blocks and 4MB objects, to establish a reference baseline.

4KB Block: Each storage node was tested using FIO across all 10 Micron 9200 MAX 6.4TB NVMe SSDs. 4KB random writes were measured with 20 jobs at a queue depth of 4; 4KB random reads were measured with 50 jobs at a queue depth of 32.

Table 5. 4KB Random Workloads: FIO on 10x 9200 MAX NVMe SSDs

Storage Node | Write IOPS | Write Avg. Latency (μs) | Read IOPS | Read Avg. Latency (μs)
Node 1 | 2.92M | 26.8 | 7.08M | 225.2
Node 2 | 2.94M | 26.7 | 7.10M | 224.6
Node 3 | 2.96M | 26.5 | 7.09M | 224.9
Node 4 | 2.91M | 26.9 | 7.09M | 225.0

4MB Object: 4MB writes were measured with 20 jobs at a queue depth of 1; 4MB reads were measured with 20 jobs at a queue depth of 2.

Table 6. 4MB Workloads: FIO on 10x 9200 MAX NVMe SSDs

Storage Node | Write Throughput | Write Avg. Latency (ms) | Read Throughput | Read Avg. Latency (ms)
Node 1 | 26.3 GB/s | 3.2 | 32.2 GB/s | 5.2
Node 2 | 25.8 GB/s | 3.3 | 32.1 GB/s | 5.2
Node 3 | 25.6 GB/s | 3.3 | 32.1 GB/s | 5.2
Node 4 | 25.4 GB/s | 3.3 | 32.0 GB/s | 5.2

Testing Methodology

4KB Random Workloads: FIO + RBD. 4KB random workloads were tested using the FIO synthetic I/O generation tool with the Ceph RADOS Block Device (RBD) driver; a representative invocation is sketched below. 100 RBD images were created at the start of testing. When testing on a 2x replicated pool, the RBD images were 75GB each (7.5TB of data), which equals 15TB of total data stored. When testing on a 3x replicated pool, the RBD images were 50GB each (5TB of data), which also equals 15TB of total data stored. The four storage nodes have a combined total of 1.5TB of DRAM, which is 10% of the dataset size.
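For reference, one FIO process of the kind used in the 4KB block tests could be launched as below with FIO's librbd engine. The pool, image and client names, run time, and queue depth are illustrative assumptions rather than the exact parameters used in this testing.

    # 4KB random writes against a single RBD image via librbd
    fio --name=rbd-4k-randwrite \
        --ioengine=rbd --clientname=admin --pool=rbd-nvme --rbdname=image01 \
        --rw=randwrite --bs=4k --iodepth=4 --numjobs=1 \
        --time_based --runtime=600 --group_reporting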

4MB Object Workloads: RADOS Bench. RADOS Bench is Ceph's built-in tool for measuring object performance. It represents the best-case object scenario of data coming directly into Ceph from a RADOS Gateway node. Object workload tests were run for 10 minutes, three times each, with Linux file system caches cleared between runs; the reported results are the averages across all test runs.

4K Random Workload FIO Test Results

4KB random workloads were tested using the FIO synthetic I/O generation tool with the Ceph RADOS Block Device (RBD) driver.

4K Random Write Workload Analysis

4KB random writes reached a maximum of 375K IOPS at 100 clients. Average latency ramped up linearly with the number of clients, reaching a maximum of 8.5ms at 100 clients. IOPS increased rapidly and then flattened out at 50-60 clients; at this point, Ceph is CPU-limited. 99.99% tail latency increased linearly up to 70 clients and then spiked upward, going from 59ms at 70 clients to 136ms at 80 clients.

Table 7. 4KB Random Write Results

FIO Clients | IOPS | Avg. Latency | 95% Latency | 99.99% Latency | Avg. CPU Util.
10 | 258,469 | 1.2 ms | 2.1 ms | 20 ms | 68.2%
20 | 313,844 | 2.0 ms | 3.6 ms | 23 ms | 81.2%
30 | 337,893 | 2.8 ms | 5.4 ms | 28 ms | 85.5%
40 | 349,354 | 3.7 ms | 7.3 ms | 36 ms | 87.3%
50 | 356,939 | 4.5 ms | 9.3 ms | 42 ms | 88.8%
60 | 362,798 | 5.3 ms | 11 ms | 50 ms | 89.1%
70 | 367,011 | 6.1 ms | 13 ms | 58 ms | 90.3%
80 | 370,199 | 6.9 ms | 15 ms | 136 ms | 90.1%
90 | 372,691 | 7.7 ms | 17 ms | 269 ms | 90.4%
100 | 374,866 | 8.5 ms | 19 ms | 382 ms | 90.8%

4K Random Read Workload Analysis

4KB random reads scaled from 289K IOPS at queue depth 1 up to 2 million IOPS at queue depth 32. Ceph reached maximum CPU utilization at a queue depth of 16; increasing to queue depth 32 doubled average latency while only marginally increasing IOPS. Tail latency spiked from 32.6ms at queue depth 16 to 330.7ms at queue depth 32, a result of CPU saturation above queue depth 16.

Table 8. 4KB Random Read Results

Queue Depth | IOPS | Avg. Latency | 95% Latency | 99.99% Latency | Avg. CPU Util.
1 | 288,956 | 0.34 ms | 0.40 ms | 1.9 ms | 13.9%
2 | 1,024,345 | 0.39 ms | 0.51 ms | 2.7 ms | 50.6%
8 | 1,558,105 | 0.51 ms | 0.75 ms | 6.5 ms | 77.8%
16 | 1,955,559 | 0.82 ms | 1.4 ms | 33 ms | 91.4%
32 | 2,012,677 | 1.6 ms | 1.8 ms | 330 ms | 92.2%

4K Random 70Read/30Write Workload Analysis

70% read / 30% write testing scaled from 211K IOPS at queue depth 1 to 837K IOPS at queue depth 32. Read and write average latencies were tracked separately, with a maximum average read latency of 2.71ms and a maximum average write latency of 6.41ms. Tail latency spiked dramatically above queue depth 16, where the 70/30 R/W test hit CPU saturation: beyond that point there was only a small increase in IOPS but a large increase in latency. The mixed workload uses the same FIO setup as the pure random tests, with the read/write mix adjusted as sketched below the table.

Table 9. 4KB 70/30 Random Read/Write Results

Queue Depth | IOPS | Avg. Read Latency | Avg. Write Latency | 99.99% Read | 99.99% Write | Avg. CPU Util.
1 | 211,322 | 0.38 ms | 0.68 ms | 4.2 ms | 23 ms | 13.9%
2 | 545,172 | 0.54 ms | 1.2 ms | 9.8 ms | 24 ms | 50.6%
8 | 682,920 | 0.82 ms | 1.9 ms | 20 ms | 29 ms | 77.8%
16 | 781,002 | 1.4 ms | 3.5 ms | 38 ms | 42 ms | 91.4%
32 | 837,459 | 2.7 ms | 6.4 ms | 324 ms | 329 ms | 92.2%
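Again as an illustration only, with placeholder pool, image and client names:

    # 70% read / 30% write at 4KB, queue depth 16
    fio --name=rbd-4k-70r30w \
        --ioengine=rbd --clientname=admin --pool=rbd-nvme --rbdname=image01 \
        --rw=randrw --rwmixread=70 --bs=4k --iodepth=16 \
        --time_based --runtime=600 --group_reporting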

4M Object Workloads Test Result

Object workloads were tested using RADOS Bench, a built-in Ceph benchmarking tool. These results represent the best-case scenario of Ceph accepting data directly from RADOS Gateway nodes; the configuration of the RADOS Gateway itself is out of scope for this RA.

4MB Object Write Workload Analysis

Writes were measured by running RADOS Bench with a constant 16 threads and scaling up the number of clients writing to Ceph concurrently. Reads were measured by first writing out 7.5TB of object data with 10 clients, then reading back that data on all 10 clients while scaling up the number of threads used. A representative RADOS Bench invocation is sketched below.

Table 10. 4MB Object Write Results

Clients (16 threads each) | Write Throughput | Avg. Latency (ms)
2 | 6.6 GB/s | 19.3
4 | 8.8 GB/s | 28.9
6 | 9.8 GB/s | 39.2
8 | 10.3 GB/s | 49.8
10 | 10.1 GB/s | 63.1
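A representative pair of RADOS Bench invocations, run from each load generation client, might look like the following; the pool name is an assumption, and the thread count is varied as described above. The read phase corresponds to the read results in the next section.

    # Write phase: 4MB object writes for 10 minutes with 16 threads,
    # keeping the objects so they can be read back afterwards
    rados bench -p rbd-nvme 600 write -b 4M -t 16 --no-cleanup

    # Drop Linux file system caches between runs, then read the data back
    echo 3 > /proc/sys/vm/drop_caches
    rados bench -p rbd-nvme 600 seq -t 16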

4MB Object Read Workload Analysis

4MB object reads reached their maximum of 30 GB/s, at 85.4ms average latency, with 32 threads per client. CPU utilization was low for this test and never exceeded 50% on average.

Table 11. 4MB Object Read Results (10 clients, varied thread count)

Threads per Client | Read Throughput | Avg. Latency (ms)
4 | 25.6 GB/s | 11.8
8 | 26.0 GB/s | 23.9
16 | 29.0 GB/s | 43.6
32 | 30.0 GB/s | 85.4

Summary

NVMe SSD technology can significantly improve Ceph performance and responsiveness, particularly for Ceph Block Storage deployments dominated by large amounts of small-block random access. Supermicro offers the industry's most optimized NVMe-enabled server and storage systems across a range of form factors and price points. Please visit www.supermicro.com/nvme to learn more.

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI), the leading innovator in high-performance, high-efficiency server technology, is a premier provider of advanced server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its We Keep IT Green initiative and provides customers with the most energy-efficient, environmentally friendly solutions available on the market. Learn more at www.supermicro.com

About Micron Technology, Inc.

We are an industry leader in innovative memory and storage solutions. Through our global brands (Micron, Crucial and Ballistix), our broad portfolio of high-performance memory and storage technologies, including DRAM, NAND, NOR Flash and 3D XPoint memory, is transforming how the world uses information to enrich life. Backed by 40 years of technology leadership, our memory and storage solutions enable disruptive trends, including artificial intelligence, machine learning, and autonomous vehicles, in key market segments like cloud, data center, networking, mobile and automotive. Our common stock is traded on the NASDAQ under the MU symbol. To learn more about Micron Technology, Inc., visit www.micron.com

No part of this document covered by copyright may be reproduced in any form or by any means (graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system) without prior written permission of the copyright owner. Configuration and performance analysis done by Micron Technology, Inc. Results published with permission. Supermicro, the Supermicro logo, Building Block Solutions, We Keep IT Green, SuperServer, Twin, BigTwin, TwinPro, TwinPro², and SuperDoctor are trademarks and/or registered trademarks of Super Micro Computer, Inc. Intel, the Intel logo, and Xeon are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. All other brand names and trademarks are the property of their respective owners. Copyright Super Micro Computer, Inc. All rights reserved. Printed in USA. Please Recycle. 14_Ceph-Ultra_190111_Rev1