MYSQL AND CEPH: 1:20pm - 2:10pm, Room 203
MYSQL IN THE CLOUD: HEAD-TO-HEAD PERFORMANCE LAB: 2:20pm - 3:10pm, Room 203
WHOIS
Brent Compton and Kyle Bader, Storage Solution Architectures, Red Hat
Yves Trudeau, Principal Architect, Percona
AGENDA
MySQL on Ceph:
- Why MySQL on Ceph
- Ceph architecture
- Tuning: MySQL on Ceph
- HW architectural considerations
MySQL in the Cloud, Head-to-Head Performance Lab:
- MySQL on Ceph vs. AWS
- Head-to-head: performance
- Head-to-head: price/performance
- IOPS performance nodes for Ceph
MySQL on Ceph vs. AWS
MYSQL ON CEPH STORAGE CLOUD: OPS EFFICIENCY
- Shared, elastic storage pool
- Dynamic DB placement
- Flexible volume resizing
- Live instance migration
- Backup to object pool
- Read replicas via copy-on-write snapshots
MYSQL-ON-CEPH PRIVATE CLOUD: FIDELITY TO A MYSQL-ON-AWS EXPERIENCE
- Hybrid cloud requires public/private cloud commonalities
- Developers want DevOps consistency
- Elastic block storage: Ceph RBD vs. AWS EBS
- Elastic object storage: Ceph RGW vs. AWS S3
- Users want deterministic performance
HEAD-TO-HEAD PERFORMANCE
30 IOPS/GB: AWS EBS P-IOPS target
HEAD-TO-HEAD LAB TEST ENVIRONMENTS
AWS:
- EC2 r3.2xlarge and m4.4xlarge
- EBS Provisioned IOPS and GP-SSD
- Percona Server
Private cloud:
- Supermicro servers
- Red Hat Ceph Storage RBD
- Percona Server
SUPERMICRO CEPH LAB ENVIRONMENT
Shared 10G SFP+ networking; monitor nodes; 5x OSD nodes; 12x client nodes

OSD storage servers: 5x SuperStorage SSG-6028R-OSDXXX
- Dual Intel Xeon E5-2650 v3 (10 cores each)
- 32GB DDR3 SDRAM
- 2x 80GB boot drives
- 4x 800GB Intel DC P3700 (hot-swap U.2 NVMe)
- 1x dual-port 10GbE network adaptor (AOC-STGN-i2S)
- 8x Seagate 6TB 7200 RPM SAS (unused in this lab)
- Mellanox 40GbE network adaptor (unused in this lab)

MySQL client systems: 12x SuperServer 2UTwin2 nodes
- Dual Intel Xeon E5-2670 v2 (cpuset-limited to 8 or 16 vCPUs)
- 64GB DDR3 SDRAM

Storage server software:
- Red Hat Ceph Storage 1.3.2
- Red Hat Enterprise Linux 7.2
- Percona Server
SYSBENCH BASELINE ON AWS EC2 + EBS
(sysbench requests/sec: 100% Read / 100% Write)
  P-IOPS m4.4xl:  7996 / 7956
  P-IOPS r3.2xl:  1680 / 1687
  GP-SSD r3.2xl:   950 /  267
SYSBENCH REQUESTS PER MYSQL INSTANCE
(sysbench requests/sec: 100% Read / 100% Write / 70:30 RW)
  P-IOPS m4.4xl:                             7996 / 1680 / --
  Ceph cluster, 1x "m4.4xl" (14% capacity): 67144 / 5677 / 20053
  Ceph cluster, 6x "m4.4xl" (87% capacity): 40031 / 1258 / 4752
CONVERTING SYSBENCH REQUESTS TO IOPS: READ PATH
Sysbench read -> X% of reads served from the InnoDB buffer pool; only misses reach disk
IOPS = READ REQUESTS x (1 - X%)
CONVERTING SYSBENCH REQUESTS TO IOPS: WRITE PATH
Sysbench write -> 1x read + 1x write; reads X% served from the InnoDB buffer pool; writes amplified by the redo log and doublewrite buffer
Read IOPS = READ REQUESTS x (1 - X%)
Write IOPS = WRITE REQUESTS x 2.3
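The conversion above can be sketched in a few lines. The request rates and the 70% buffer-pool hit rate in the example call are hypothetical placeholders; only the model itself (misses reach disk, writes cost ~2.3 I/Os) comes from the slides.

```python
# Convert sysbench request rates to backend disk IOPS, following the
# slide's model: reads that miss the InnoDB buffer pool hit disk, and
# each write request costs ~2.3 disk I/Os (redo log + doublewrite buffer).

def disk_iops(read_reqs: float, write_reqs: float, bp_hit_rate: float) -> float:
    """Estimate disk IOPS from sysbench read/write request rates.

    bp_hit_rate: fraction of reads served from the InnoDB buffer pool.
    """
    read_iops = read_reqs * (1.0 - bp_hit_rate)   # buffer-pool misses only
    write_iops = write_reqs * 2.3                 # log + doublewrite amplification
    return read_iops + write_iops

# Hypothetical example: 7000 read req/s, 1680 write req/s, 70% hit rate
# -> 7000 * 0.3 + 1680 * 2.3 disk IOPS
print(round(disk_iops(7000, 1680, 0.70), 1))
```

Dividing the result by the instance's allocated storage in GB yields the IOPS/GB figures used throughout the rest of the deck.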
AWS IOPS/GB BASELINE: ~AS ADVERTISED!
(IOPS/GB: 100% Read / 100% Write)
  P-IOPS m4.4xl: 30.0 / 29.8
  P-IOPS r3.2xl: 25.6 / 25.7
  GP-SSD r3.2xl:  3.6 /  4.1
IOPS/GB PER MYSQL INSTANCE
(MySQL IOPS/GB: Reads / Writes)
  P-IOPS m4.4xl:                            30 / 26
  Ceph cluster, 1x "m4.4xl" (14% capacity): 252 / 78
  Ceph cluster, 6x "m4.4xl" (87% capacity): 150 / 19
FOCUSING ON WRITE IOPS/GB
30 IOPS/GB: AWS throttle watermark for deterministic performance
(write IOPS/GB)
  P-IOPS m4.4xl:                            26
  Ceph cluster, 1x "m4.4xl" (14% capacity): 78
  Ceph cluster, 6x "m4.4xl" (87% capacity): 19
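A quick way to read this chart is to compare each measured write-IOPS/GB figure against the 30 IOPS/GB watermark AWS uses for deterministic performance; the figures below are the ones from the slide.

```python
# Compare measured write IOPS/GB (values from the slide) against the
# 30 IOPS/GB AWS EBS P-IOPS throttle watermark.

WATERMARK = 30  # IOPS/GB: AWS EBS P-IOPS deterministic-performance target

measured = {
    "P-IOPS m4.4xl": 26,
    'Ceph 1x "m4.4xl" (14% capacity)': 78,
    'Ceph 6x "m4.4xl" (87% capacity)': 19,
}

for config, iops_per_gb in measured.items():
    verdict = "meets" if iops_per_gb >= WATERMARK else "below"
    print(f"{config}: {iops_per_gb} IOPS/GB ({verdict} the {WATERMARK} IOPS/GB watermark)")
```

Only the lightly loaded Ceph cluster clears the watermark here; the next slide shows how that changes with cluster loading.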
EFFECT OF CEPH CLUSTER LOADING ON IOPS/GB
(IOPS/GB: 100% Write / 70:30 RW)
  Ceph cluster, 14% capacity: 78 / 134
  Ceph cluster, 36% capacity: 37 / 72
  Ceph cluster, 72% capacity: 25 / 37
  Ceph cluster, 87% capacity: 19 / 36
A NOTE ON WRITE AMPLIFICATION: MYSQL ON CEPH WRITE PATH
MySQL insert -> InnoDB doublewrite buffer (x2) -> Ceph replication (x2) -> OSD journaling (x2)
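The x2 at each stage compounds multiplicatively, so one MySQL insert becomes roughly eight backend writes. A minimal sketch of that arithmetic (the stage names and factors are the ones on the slide):

```python
# Compound the per-stage write amplification from the slide:
# InnoDB doublewrite buffer (x2), Ceph replication (x2, i.e. size=2),
# and OSD journaling (x2).

from math import prod

stages = {
    "InnoDB doublewrite buffer": 2,
    "Ceph replication": 2,
    "OSD journaling": 2,
}

total_amplification = prod(stages.values())
print(f"Total write amplification: {total_amplification}x")  # -> 8x
```

This cluster-side amplification is separate from the 2.3x sysbench-to-IOPS multiplier earlier, which is measured on the MySQL side.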
HEAD-TO-HEAD PERFORMANCE
30 IOPS/GB: AWS EBS P-IOPS target
25 IOPS/GB: Ceph at 72% cluster capacity (writes)
78 IOPS/GB: Ceph at 14% cluster capacity (writes)
HEAD-TO-HEAD PRICE/PERFORMANCE
$2.50: target AWS EBS P-IOPS storage cost per IOP
IOPS/GB ON VARIOUS CONFIGS (sysbench write)
  AWS EBS Provisioned-IOPS:                    31
  Ceph on Supermicro FatTwin, 72% capacity:    18
  Ceph on Supermicro MicroCloud, 87% capacity: 18
  Ceph on Supermicro MicroCloud, 14% capacity: 78
$/STORAGE-IOP ON THE SAME CONFIGS (sysbench write)
  AWS EBS Provisioned-IOPS:                    $2.40
  Ceph on Supermicro FatTwin, 72% capacity:    $0.80
  Ceph on Supermicro MicroCloud, 87% capacity: $0.78
  Ceph on Supermicro MicroCloud, 14% capacity: $1.06
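The $/storage-IOP metric is simply amortized storage cost divided by delivered write IOPS. A minimal sketch; the dollar and IOPS inputs in the example are hypothetical placeholders, not the figures behind the chart.

```python
# Sketch of the $/storage-IOP metric: storage cost divided by the
# sustained write IOPS that storage delivers. Inputs are illustrative.

def dollars_per_iop(storage_cost_usd: float, delivered_write_iops: float) -> float:
    """Cost of the storage divided by sustained write IOPS."""
    return storage_cost_usd / delivered_write_iops

# Hypothetical example: $50,000 of storage sustaining 62,500 write IOPS
print(f"${dollars_per_iop(50_000, 62_500):.2f} per IOP")  # -> $0.80 per IOP
```

Note the comparison is storage-only on the AWS side (EBS pricing, excluding EC2), mirroring the hardware-cost basis used for the Ceph clusters.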
HEAD-TO-HEAD PRICE/PERFORMANCE
$2.50: target AWS P-IOPS $/IOP (EBS only)
$0.78: Ceph on Supermicro MicroCloud cluster
IOPS PERFORMANCE NODES FOR CEPH
ARCHITECTURAL CONSIDERATIONS: UNDERSTANDING THE WORKLOAD
Traditional Ceph workload: $/GB, PBs, unstructured data, MB/sec
MySQL Ceph workload: $/IOP, TBs, structured data, IOPS
ARCHITECTURAL CONSIDERATIONS: FUNDAMENTALLY DIFFERENT DESIGN
Traditional Ceph workload: 50-300+ TB per server; magnetic media (HDD); low CPU-core:OSD ratio; 10GbE -> 40GbE
MySQL Ceph workload: < 10 TB per server; flash (SSD -> NVMe); high CPU-core:OSD ratio; 10GbE
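For flash-backed, IOPS-oriented OSD nodes like these, thread and queue depths are the usual knobs. A hedged ceph.conf sketch for the filestore-era Red Hat Ceph Storage 1.3 used in this lab; the values are illustrative starting points, not the lab's actual configuration.

```ini
# Illustrative [osd] tuning for flash-backed, IOPS-oriented OSDs
# (filestore era, e.g. Red Hat Ceph Storage 1.3). Example starting
# points only -- not the configuration used in this lab.
[osd]
osd_op_threads = 8                  ; more parallelism for small random I/O
filestore_op_threads = 8            ; filestore worker threads per OSD
filestore_queue_max_ops = 5000      ; deeper queue for NVMe backends
filestore_max_sync_interval = 10    ; seconds between filestore syncs
journal_max_write_entries = 1000    ; larger journal write batches
journal_max_write_bytes = 1048576000
```

Benchmark any such changes against your own workload; the right values depend on the core-to-flash ratio discussed above.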
IOPS/GB CONSIDERING CORE-TO-FLASH RATIO 40 35 34 34 36 30 25 20 18 18 19 15 100% Write 70/30 RW 10 5 6 8 0 Ceph cluster 80 cores 8 NVMe (87% capacity) Ceph cluster 40 cores 4 NVMe (87% capacity) Ceph cluster 80 cores 4 NVMe (87% capacity) Ceph cluster 80 cores 12 NVMe (84% capacity)
SUPERMICRO MICRO CLOUD: CEPH MYSQL PERFORMANCE SKU
1x CPU + 1x NVMe + 1x SFP+ per node; 8x nodes in a 3U chassis
Model: SYS-5038MR-OSDXXXP
Per-node configuration:
- CPU: single Intel Xeon E5-2630 v4
- Memory: 32GB
- NVMe storage: single 800GB Intel P3700
- Networking: 1x dual-port 10G SFP+
SEE US AT PERCONA LIVE!
- Hands-on Test Drive: MySQL on Ceph (April 18, 1:30-4:30)
- MySQL on Ceph (April 19, 1:20-2:10)
- MySQL in the Cloud: Head-to-Head Performance (April 19, 2:20-3:10)
- Running MySQL Virtualized on Ceph: Which Hypervisor? (April 20, 3:30-4:20)
THANK YOU!