What's new in Percona XtraDB Cluster 5.6
Jay Janssen, Lead Consultant
February 5th, 2014
Overview
PXC 5.6 is the aggregation of: Percona Server 5.6, Codership MySQL 5.6 patches, Galera 3.x
Agenda: Major new features; Async replication and PXC 5.6; 5.6 features that play nicely (or not) with Galera; Future features; Questions
What is Percona XtraDB Cluster?
Read-and-write-anywhere InnoDB cluster
Highly available: quorum-based failover
Synchronous replication for data consistency; the cluster is responsible for node state
PXC product: 2 years in GA with 5.5, 13 releases total, over 150k downloads
Lots of development effort in the test suite
Easy to migrate from, and integrated with, MySQL async replication
Galera Replication Enhancements
Replication overhead for Keys
Keys track all schemas, tables, and PK/UK/FKs of rows being modified in the writeset
[Chart: MBytes replicated for 250k row inserts a la sysbench prepare — Keys overhead drops from 5.25 MB (Galera 2) to 1.91 MB (Galera 3), on top of 45.25 MB of RBR data in both]
Keys are now hashed
Hashed keys reduce writeset sizes, especially for large key columns!
[Chart: single INSERT writeset size in bytes across G2 INT, G2 CHAR(128), G3 FLAT8, and G3 FLAT16 — key portions of 174/58/47/71 bytes on top of a constant 223-byte RBR payload]
Improved memory usage (certification index)
[Chart: mysqld RSS in kilobytes, at startup (innodb_buffer_pool_populate=on) and after a large transaction, for wsrep_provider=none, Galera 2, and Galera 3; data labels: 168,684; 170,664; 197,460; 198,428; 384,252; 646,444 KB — Galera 3's certification index uses far less memory after a large transaction than Galera 2's]
Overall improvements
Writesets have a new format: faster certification, less memory, checksums
Writeset keys stored as hashes (FLAT8/FLAT16)
Socket checksums: CRC32-C (hw-accelerated where supported)
WAN Segments
Multiple Datacenter Cluster
[Diagram: one cluster spanning Datacenter 1, Datacenter 2, and Datacenter 3]
Replication without Segments
[Diagram: a client commits on a node in Datacenter 2; that node sends the writeset individually to every other node in all three datacenters]
Cluster segments
Set gmcast.segment distinct per location: all nodes in colo1 have gmcast.segment=1, all nodes in colo2 have gmcast.segment=2, etc.
Benefits: replication traffic is minimized between segments; donor selection prefers the local segment
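As a sketch, segment assignment is a single wsrep provider option per node (the library path and option layout below are illustrative; adapt to your platform):

```ini
# my.cnf on every node in Datacenter 1 (colo1)
[mysqld]
wsrep_provider=/usr/lib64/libgalera_smm.so   # path varies by platform
wsrep_provider_options="gmcast.segment=1"

# Nodes in Datacenter 2 (colo2) would instead use:
#   wsrep_provider_options="gmcast.segment=2"
```

If other provider options are already set, gmcast.segment is appended to the same semicolon-separated wsrep_provider_options string.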
Replication with Segments
[Diagram: Datacenter 1/2/3 with gmcast.segment=1/2/3; a client commits in Datacenter 2 and the writeset crosses each inter-segment link once, then is relayed within each segment]
Donor Selection with Segments
[Diagram: a joiner prefers local nodes in its own segment as donors]
Asynchronous Replication into the cluster
Any node can be a slave
[Diagram: an external async master replicating into one node of the three-node cluster (node1, node2, node3)]
Galera 2 behavior
Galera <= 2.x, or wsrep_preordered=off
[Diagram: for each transaction replicated from the async master, the slave node begins, applies, and requests certification; the writeset is replicated and certified on every node, each node applies it, and the commit is finalized — one full cycle per replicated transaction]
Galera 3 with wsrep_preordered=on
Galera 3+ and wsrep_preordered=on
[Diagram: the slave node replicates each async transaction to the cluster as a pre-ordered writeset; certification (certify*) is trivially passed for pre-ordered writesets, and each node begins, applies, and finalizes the commit — one cycle per replicated transaction without a full certification round]
When to use wsrep_preordered
wsrep_preordered=on: better performance; does not allow for conflicts with any other writes; no parallel apply for these transactions; good for a Master/Slave-to-Cluster migration
wsrep_preordered=off: detects conflicts with any other writes; allows parallel apply; good for a permanent Cluster-is-a-slave setup
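A minimal my.cnf sketch for the cluster node acting as the async slave (all surrounding replication settings omitted):

```ini
[mysqld]
# ON: transactions from the async master are replicated as pre-ordered
# writesets -- faster, but no conflict detection and no parallel apply
# for these transactions.
# OFF (the default): normal certification, conflicts detected,
# parallel apply allowed.
wsrep_preordered=ON
```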
Asynchronous Replication out of the cluster
5.6 async GTID integration
Every node with log-bin will have the same GTID for the same transaction
[Diagram: an async slave of node1 fails over to node2 after node1 fails, using:]
CHANGE MASTER TO MASTER_HOST='node2', MASTER_AUTO_POSITION=1
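The failover above can be sketched as follows, assuming the standard 5.6 GTID prerequisites (gtid_mode, enforce_gtid_consistency, log_bin, log_slave_updates) are enabled on every node; host names are illustrative:

```sql
-- On the async slave, after node1 fails:
STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST = 'node2',
  MASTER_AUTO_POSITION = 1;  -- slave requests every GTID it has not executed
START SLAVE;
```

Because every cluster node logged the same GTID for each transaction, node2 can serve exactly the events the slave is missing.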
General 5.6 Improvements
Can I use <5.6 feature> in PXC?
Every new feature works fine within a single node, e.g., InnoDB full-text indexes, partitioning, optimizer enhancements, Performance Schema, etc.
Don't expect automatic cluster support, e.g., Online DDL improvements, the memcached server: https://bugs.launchpad.net/percona-xtradbcluster/+bug/1254126
Minimal RBR images
Pre-5.6, RBR logs full row images for any row change
5.6 adds binlog_row_image=minimal (not the default)
Seems fine with PXC: the RBR image is a black box to Galera
[Chart: 1-minute sysbench update test (1 column out of 3 modified) — 62.3 MB replicated with binlog_row_image=full vs 13.4 MB with minimal]
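A sketch of enabling this; binlog_row_image is also dynamic, so it can be tried at runtime with SET GLOBAL before committing it to my.cnf:

```ini
[mysqld]
binlog_format=ROW
# 'minimal' logs only the changed columns plus enough (PK) columns to
# identify the row; the 5.6 default remains 'full'
binlog_row_image=minimal
```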
Upgrading a 5.5 cluster to 5.6
The easy way
Take the downtime and upgrade it all at once:
Check for my.cnf settings that are not 5.6 compatible
Start each node with wsrep_provider=none
Run mysql_upgrade
Shut down again
Bootstrap normally
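The steps above as a per-node command outline (a sketch only; service names and the bootstrap command depend on your platform and packaging):

```shell
# After shutting the whole cluster down and installing the 5.6 packages:
mysqld_safe --wsrep-provider=none &   # start without Galera replication
mysql_upgrade                         # fix up the system tables for 5.6
mysqladmin shutdown                   # stop the standalone instance
# Once every node is upgraded, bootstrap the first node, e.g.:
#   service mysql bootstrap-pxc
# and start the remaining nodes normally.
```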
Rolling upgrade
RBR 5.6 -> 5.5 replication is broken, so for each 5.6-upgraded node:
Compatibility options with 5.5 must be set, including Galera socket.checksum=1
Don't SST 5.5 to 5.6!
Don't write on these nodes!
Before the last node, flip the application to the 5.6 nodes
Take down the remaining 5.5 node(s) for upgrade
Rolling restart to remove the 5.5 compat options
http://bit.ly/pxc56-rolling-upgrade
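As an illustrative sketch, the 5.5-compatibility settings on each upgraded 5.6 node might look like this; treat the linked post as the authoritative list:

```ini
# my.cnf on a 5.6 node while 5.5 nodes remain in the cluster (sketch)
[mysqld]
wsrep_provider_options="socket.checksum=1"  # Galera 2-compatible checksums
log_bin_use_v1_row_events=1   # write RBR events that 5.5 can read
binlog_checksum=NONE          # 5.5 cannot verify 5.6 binlog checksums
gtid_mode=OFF                 # keep GTID events out of the binlog
read_only=1                   # enforce "don't write on these nodes"
```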
Odds and Ends
My Favorite Bug Fixes
Adding an auto_increment column to an existing table doesn't cause inconsistency with auto_increment_control
wsrep_local_bf_aborts catches all BF aborts now
wsrep_flow_control_sent/received are now global counters
wsrep_max_ws_size/ws_rows moving towards being properly enforced now*
Future Features
Automatic huge-transaction fragmentation/streaming support
Non-blocking DDL support
Cluster tolerance to inconsistencies
Intelligent inconsistency handling (e.g., node voting)
Intelligent donor selection (e.g., check gcache)
Performance optimizations (e.g., multi-core)
Multiple provider support
Questions?
!!! Advanced Rates End March 2nd at 11:30pm PST, 2014!
Special Discount for Webinar Attendees: Use Code WebinarSC to receive 10% off of standard rates (new registrations only)!
http:///live/mysqlconference-2014/