All-NVMe Performance Deep Dive Into Ceph + Sneak Preview of QLC + NVMe Ceph


All-NVMe Performance Deep Dive Into Ceph + Sneak Preview of QLC + NVMe Ceph
Ryan Meredith, Sr. Manager, Storage Solutions Engineering

2018 Micron Technology, Inc. All rights reserved. Information, products, and/or specifications are subject to change without notice. All information is provided on an AS IS basis without warranties of any kind. Statements regarding products, including regarding their features, availability, functionality, or compatibility, are provided for informational purposes only and do not modify the warranty, if any, applicable to any product. Drawings may not be to scale. Micron, the Micron logo, and all other Micron trademarks are the property of Micron Technology, Inc. All other trademarks are the property of their respective owners.

Micron Storage Solutions Engineering, Austin, TX ("Big Fancy Lab")
- Real-world application performance testing using Micron storage & memory
- Ceph, vSAN, Storage Spaces Direct
- Hadoop, Spark
- Oracle, MSSQL, MySQL
- Cassandra, MongoDB

Micron Reference Architectures
- Micron SSDs + Micron Memory + partner ISV software + partner OEM hardware
- Pre-validated in the Micron Storage Solutions Center and published as a Micron Reference Architecture (vs. a DIY solution)

What is Ceph?

Ceph is the Primary Storage for OpenStack

What is Ceph?
- Software-defined storage: uses off-the-shelf servers & drives
- Open source: dev team hired by Red Hat; Red Hat and other Linux vendors sell supported Ceph versions
- Scale out: add storage nodes to increase space & compute
- Data protection: crc32c checksums plus replication or erasure coding
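
As a point of reference (not from the original deck), the protection scheme is chosen per pool; a minimal sketch of creating a replicated pool and an erasure-coded pool with the standard Ceph CLI could look like the following (pool names, PG counts, and k/m values are illustrative):

# Replicated pool: three copies of every object
ceph osd pool create rep_pool 128 128 replicated
ceph osd pool set rep_pool size 3

# Erasure-coded pool: 4 data chunks + 2 coding chunks per object
ceph osd erasure-code-profile set ec42 k=4 m=2
ceph osd pool create ec_pool 128 128 erasure ec42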

What is Ceph?
- Supports object, block, and file storage
- Object: native RADOS API / Amazon S3 / Swift
- Block: RADOS Block Device (RBD) presents an image to a client as a standard block device; any Linux server with librbd installed can use it as persistent storage; tested using standard storage benchmark tools like FIO
- File: POSIX-compliant file system, mounted directly on a Linux host, single namespace
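
For illustration (not part of the original slides), FIO's rbd ioengine drives an RBD image directly through librbd; a minimal 4KB random-write job against a hypothetical pool and image might look like this:

fio --name=rbd-randwrite \
    --ioengine=rbd --clientname=admin --pool=rbd_pool --rbdname=image01 \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
    --direct=1 --time_based --runtime=600 --ramp_time=300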

What is Ceph?
- RADOS: Reliable Autonomic Distributed Object Store
- LIBRADOS: object API
- RADOSGW: S3 / Swift API gateway
- LIBRBD: block storage
- OSD: Object Storage Daemon, the process that manages storage; usually 1 to 2 OSDs per drive
- MON: monitor node, maintains the CRUSH map; 3 MONs minimum for failover
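
A few standard CLI commands (shown here as an aside, not from the deck) expose these components on a running cluster:

ceph -s                  # overall health, MON quorum, OSD up/in counts
ceph mon stat            # monitor quorum membership
ceph osd tree            # OSDs arranged by the CRUSH hierarchy
ceph osd crush rule ls   # CRUSH placement rules in effect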

Ceph Luminous All-NVMe Performance

Hardware Configuration: Micron + Red Hat + Supermicro ALL-NVMe Ceph
Storage Nodes (x4): Supermicro SYS-1029U-TN10RT+
- 2x Intel Xeon 8168: 24 cores, 2.7GHz base / 3.7GHz turbo
- 384GB Micron High Quality Excellently Awesome DDR4-2666 DRAM (12x 32GB)
- 2x Mellanox ConnectX-5 100GbE 2-port NICs: 1 NIC for client network / 1 NIC for storage network
- 10x Micron 6.4TB 9200 MAX NVMe SSDs
  - 3 drive writes per day endurance
  - 770k 4KB random read IOPS / 270k 4KB random write IOPS
  - 3.15 GB/s sequential read / 2.3 GB/s sequential write
- 64TB per storage node / 256TB in the 4-node RA as tested

Hardware Configuration: Micron + Red Hat + Supermicro ALL-NVMe Ceph
Monitor Nodes (x3): Supermicro SYS-1028U-TNRT+ (1U)
- 128GB DRAM
- 50GbE Mellanox ConnectX-4 network
Network: 2x Supermicro SSE-C3632SR 100GbE 32-port switches
- 1 switch for client network / 1 switch for storage network
Load Generation Servers (Clients): 10x Supermicro SYS-2028U (2U)
- 2x Intel 2690v4
- 256GB RAM
- 50GbE Mellanox ConnectX-4

Software Configuration: Micron + Red Hat + Supermicro ALL-NVMe Ceph
Storage + Monitor Nodes + Clients
- Ceph Luminous 12.2.4
- Red Hat Enterprise Linux 7.4
- Mellanox OFED driver 4.1
Switch OS: Cumulus Linux 3.4.1
Deployment tool: Ceph-Ansible
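
As a sketch of the deployment flow (the inventory path and playbook location are assumptions, not details from the deck), a ceph-ansible deployment is typically driven by a single playbook run against a host inventory that groups the mons, osds, and clients:

cd /usr/share/ceph-ansible            # typical location when installed from the package
ansible-playbook -i hosts site.yml    # 'hosts' lists the [mons], [osds], [clients] groups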

Performance Testing Methodology: Micron + Red Hat + Supermicro ALL-NVMe Ceph
- 2 OSDs per NVMe drive / 80 OSDs total
- Ceph storage pool config, 2x replication: 8192 PGs, 100x 75GB RBD images = 7.5TB data x 2
- FIO RBD for block tests (4KB block size)
  - Writes: FIO at queue depth 32 while scaling up the number of client FIO processes
  - Reads: FIO against all 100 RBD images, scaling up queue depth
- RADOS Bench for object tests (4MB objects)
  - Writes: RADOS Bench at 16 threads, scaling up the number of clients
  - Reads: RADOS Bench on 10 clients, scaling up the number of threads
- 10-minute test runs x3, average recorded (5-minute ramp-up for FIO)
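
To make the methodology concrete, the steps above roughly correspond to commands like the following (pool and image names are illustrative, not the exact scripts used):

# 2x replicated pool with 8192 placement groups backing the RBD images
ceph osd pool create rbd_bench 8192 8192 replicated
ceph osd pool set rbd_bench size 2

# One of the 100 75GB RBD images targeted by FIO (see the FIO job sketched earlier)
rbd create rbd_bench/image01 --size 75G

# 4MB object tests: 10-minute write pass (objects kept), then a sequential read pass
rados bench -p rbd_bench 600 write -t 16 --no-cleanup
rados bench -p rbd_bench 600 seq -t 16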

Ceph Bluestore & NVMe: The Tune-Pocalypse
- Ceph Luminous Community 12.2.4, tested using Bluestore, the newer storage engine for Ceph
- The default RocksDB tuning for Bluestore in Ceph is great for large objects but bad for 4KB random IO on NVMe
- Worked with Mark Nelson & the Red Hat team to tune RocksDB for good 4KB random performance

Bluestore & NVMe: The Tune-Pocalypse
Bluestore OSD tuning for 4KB random writes:
- Set a high max_write_buffer_number and min_write_buffer_number_to_merge
- Set a low write_buffer_size

[osd]
bluestore_cache_kv_max = 200G
bluestore_cache_kv_ratio = 0.2
bluestore_cache_meta_ratio = 0.8
bluestore_cache_size_ssd = 18G
osd_min_pg_log_entries = 10
osd_max_pg_log_entries = 10
osd_pg_log_dups_tracked = 10
osd_pg_log_trim_min = 10
bluestore_rocksdb_options = compression=kNoCompression,max_write_buffer_number=64,min_write_buffer_number_to_merge=32,recycle_log_file_num=64,compaction_style=kCompactionStyleLevel,write_buffer_size=4MB,target_file_size_base=4MB,max_background_compactions=64,level0_file_num_compaction_trigger=64,level0_slowdown_writes_trigger=128,level0_stop_writes_trigger=256,max_bytes_for_level_base=6GB,compaction_threads=32,flusher_threads=8,compaction_readahead_size=2MB

Ceph Luminous 12.2: 4KB Random Read (Micron + Red Hat + Supermicro ALL-NVMe Ceph)
[Chart: Bluestore 4KB random read IOPS and average latency vs. queue depth, QD 1-32]
- 4KB random reads at queue depth 32: 2.15 million IOPS at 1.5ms average latency
- Tests become CPU limited around queue depth 16

Ceph Luminous 12.2: 4KB Random Read (Micron + Red Hat + Supermicro ALL-NVMe Ceph)
[Chart: Bluestore 4KB random read IOPS and 99.99% tail latency vs. queue depth, QD 1-32]
- 4KB random reads at queue depth 32: Bluestore 99.99% tail latency of 251ms
- Tail latency spikes as tests become CPU limited

Ceph Luminous 12.2: 4KB Random Write (Micron + Red Hat + Supermicro ALL-NVMe Ceph)
[Chart: Bluestore 4KB random write IOPS and average latency vs. number of FIO clients, 20-100]
- 4KB random writes at 100 clients: 443k IOPS at 7ms average latency

Ceph Luminous 12.2: 4KB Random Write (Micron + Red Hat + Supermicro ALL-NVMe Ceph)
[Chart: Bluestore 4KB random write IOPS and 99.99% tail latency vs. number of FIO clients, 20-100]
- 4KB random writes at 100 clients: 99.99% tail latency of 89ms

Ceph Luminous 12.2: 4MB Object Read (Micron + Red Hat + Supermicro ALL-NVMe Ceph)
[Chart: Bluestore 4MB object read throughput (GB/s) and average latency vs. RADOS Bench threads, 4-32]
- 4MB object reads at 16 threads: 42.9 GB/s at 29ms average latency

Ceph Luminous 12.2: 4MB Object Write (Micron + Red Hat + Supermicro ALL-NVMe Ceph)
[Chart: Bluestore 4MB object write throughput (GB/s) and average latency vs. number of clients, 2-10]
- 4MB object writes at 6 clients: 18.4 GB/s at 21ms average latency

Performance Comparison: 4KB Random Block
2017 Micron RA vs. 2017 Competitor RA vs. 2018 Micron RA + Bluestore: 4KB random IO performance per node

                                                                      4KB Random Read   4KB Random Write
2017 Micron Ceph RA (Filestore, RHCS 2.1 / 9100 MAX)                        287k              62k
2017 Competitor RA (Bluestore, Luminous 12.0 / P4800X + P4500 NVMe)         360k              76k
2018 Micron Ceph RA (Bluestore, Luminous 12.2 / 9200 MAX)                   539k             111k

- 4KB random reads: 539k IOPS per node (1.75X)
- 4KB random writes: 111k IOPS per node (1.9X)

Performance Comparison: 4MB Object
2017 Micron RA vs. 2018 Micron RA + Bluestore: 4MB object throughput, RADOS Bench

                                                             4MB Object Read   4MB Object Write
2017 Micron Ceph RA (Filestore, RHCS 2.1 / 9100 MAX)             22 GB/s           4.6 GB/s
2018 Micron Ceph RA (Bluestore, Luminous 12.2 / 9200 MAX)        43 GB/s            18 GB/s

- 1.95X read throughput, 3.9X write throughput

Ceph Luminous QLC + NVMe Performance

Hardware Configuration: QLC + NVMe Ceph
Storage Nodes (x3): Supermicro SYS-1029U-TR25M
- 2x Intel Xeon 6148: 20 cores, 2.4GHz base / 3.7GHz turbo
- 384GB Micron High Quality Excellently Awesome DDR4-2666 DRAM (12x 32GB)
- 25GbE network: 1 port for client network / 1 port for storage network
- 8x Micron 7.68TB 5210 ION QLC SSDs: ~64TB per storage node / 192TB in the 3-node cluster as tested
- 2x Micron 1.6TB 9200 MAX NVMe SSDs: used for WAL + RocksDB instances (write coalescing + acceleration)
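
One way to express that WAL + RocksDB placement (device paths are placeholders, not the exact provisioning commands from this build) is a ceph-volume call that keeps OSD data on the QLC SSD and puts block.db, and with it the WAL, on a partition of the 9200 MAX:

ceph-volume lvm create --bluestore \
    --data /dev/sdb \
    --block.db /dev/nvme0n1p1    # WAL co-locates with the DB when only block.db is given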

Performance Testing Methodology: Micron QLC + NVMe Ceph
- 2 OSDs per 5210 / 48 OSDs total
- Ceph storage pool config, 2x replication: 8192 PGs, 50x 115GB RBD images = 5.8TB data x 2
- FIO RBD for block tests (16KB block size)
  - Mixed workloads + 100% reads: FIO against all 50 RBD images, scaling up queue depth
- RADOS Bench for object tests (1MB object size)
  - Reads & writes: RADOS Bench on 10 clients, scaling up the number of threads
- 10-minute test runs x3, average recorded (5-minute ramp-up for FIO)
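
A 70%/30% mixed 16KB job of the kind measured below could be expressed with FIO's rwmixread option (illustrative only, assuming the same rbd ioengine setup as the earlier example):

fio --name=mixed-70-30 --ioengine=rbd --pool=qlc_bench --rbdname=image01 \
    --rw=randrw --rwmixread=70 --bs=16k --iodepth=16 --direct=1 \
    --time_based --runtime=600 --ramp_time=300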

Ceph Luminous: 16KB 100% Read (Micron QLC + NVMe)
[Chart: 16KB read throughput (GB/s) and average latency vs. IO depth 4-32, Ceph Luminous 12.2.5, Micron 5210 + 9200 MAX]
- 16KB reads: 8+ GB/s at queue depth 8, network limited on 3x 25GbE
- Sub-millisecond average latency

Ceph Luminous: 16KB 90%/10% R/W (Micron QLC + NVMe)
[Chart: 16KB 90%/10% read/write throughput (GB/s) and average read/write latency vs. IO depth 4-32, Ceph Luminous 12.2.5, Micron 5210 + 9200 MAX]
- 16KB 90%/10% reads/writes: 1.7 GB/s at queue depth 16, drive limited
- 5.8ms average read latency, 16.7ms average write latency

Ceph Luminous: 16KB 70%/30% R/W (Micron QLC + NVMe)
[Chart: 16KB 70%/30% read/write throughput (GB/s) and average read/write latency vs. IO depth 4-32, Ceph Luminous 12.2.5, Micron 5210 + 9200 MAX]
- 16KB 70%/30% reads/writes: 1.1 GB/s at queue depth 32, drive limited
- 11ms average read latency, 28ms average write latency

Ceph Luminous 12.2: 1MB Object Read (Micron QLC + NVMe)
[Chart: 1MB object read throughput (GB/s) and average latency vs. RADOS Bench threads 4-32, Ceph Luminous 12.2.5, Micron 5210 + 9200 MAX]
- 1MB object reads: 8+ GB/s at 16 threads, network limited on 3x 25GbE
- 18ms average latency

Ceph Luminous 12.2: 1MB Object Write (Micron QLC + NVMe)
[Chart: 1MB object write throughput (GB/s) and average latency vs. RADOS Bench threads 4-16, Ceph Luminous 12.2.5, Micron 5210 + 9200 MAX]
- 1MB object writes: 3+ GB/s at 12 threads, drive limited
- 40ms average latency

Would you like to know more?
Today 8/7
- 1:50pm: Micron Keynote, Derek Dicker
- 3:40pm: QLC is the Best Way to Replace Enterprise HDDs
- 3:40pm: Flash-Memory Based Architectures
- 3:40pm: New Directions in Security
- 4:55pm: Meeting the Storage Needs of 5G Networks
- 4:55pm: Encryption for Data Protection
- 7:00pm: Micron Mixer in the Terra Courtyard
Wednesday 8/8
- 8:30am: New Flexible Form Factors for Enterprise SSDs
Thursday 8/9
- 2:10pm: The Next Great Breakthrough in NAND Flash

Would you like to know more?
Micron NVMe Reference Architecture:
https://www.micron.com/~/media/documents/products/technicalmarketing-brief/micron_9200_ceph_3,-d-,0_reference_architecture.pdf
Micron Storage Blogs (Ceph): https://www.micron.com/about/blogs/authors/ryan-meredith

Thanks All

Micron Ceph Appendix
ALL-NVMe Ceph RA: Bluestore Ceph.conf

[global]
auth client required = none
auth cluster required = none
auth service required = none
auth supported = none
mon host = xxxxxxx
osd objectstore = bluestore
cephx require signatures = False
cephx sign messages = False
mon_allow_pool_delete = true
mon_max_pg_per_osd = 800
mon pg warn max per osd = 800
ms_crc_header = False
ms_crc_data = False
ms type = async
perf = True
rocksdb_perf = True
osd_pool_default_size = 2

[mon]
mon_max_pool_pg_num = 166496
mon_osd_max_split_count = 10000

[client]
rbd_cache = false
rbd_cache_writethrough_until_flush = false

[osd]
bluestore_csum_type = none
bluestore_cache_kv_max = 200G
bluestore_cache_kv_ratio = 0.2
bluestore_cache_meta_ratio = 0.8
bluestore_cache_size_ssd = 18G
bluestore_extent_map_shard_min_size = 50
bluestore_extent_map_shard_max_size = 200
bluestore_extent_map_shard_target_size = 100
osd_min_pg_log_entries = 10
osd_max_pg_log_entries = 10
osd_pg_log_dups_tracked = 10
osd_pg_log_trim_min = 10
bluestore_rocksdb_options = compression=kNoCompression,max_write_buffer_number=64,min_write_buffer_number_to_merge=32,recycle_log_file_num=64,compaction_style=kCompactionStyleLevel,write_buffer_size=4MB,target_file_size_base=4MB,max_background_compactions=64,level0_file_num_compaction_trigger=64,level0_slowdown_writes_trigger=128,level0_stop_writes_trigger=256,max_bytes_for_level_base=6GB,compaction_threads=32,flusher_threads=8,compaction_readahead_size=2MB