MariaDB High Availability. MariaDB Training


1 MariaDB Training

2 Introduction (course agenda): Introduction, Overview, MariaDB Replication, Complex Scenarios, Semi-Synch Plugin, Replication Manager, Enterprise Cluster, Cluster Configuration, Cluster Schema Changes, Back-Ups with Cluster, MariaDB MaxScale, MaxScale with Replication, MaxScale Installation, Disk Based Solutions, DRBD Configuration, DRBD Monitoring, Shared Disk Clustering, Conclusion

3 Introducing MariaDB Ab Founders from MySQL, the Company and Community Funded by Founders, Employees, and Venture Capital Over 100 Employees, including Several former MySQL Employees and Community Members, in over 14 Countries 3

4 Personal Introductions Instructor Name and Background Participants Name and Company MariaDB Experience How You Use MariaDB Needs Related to Course Topics 4

5 Class Schedule & Personal Concerns Starting and Ending Times Planned Breaks On-Site Location of Rest Rooms Smoking Areas Snacks and Drinks LVC Classes Chat with Everyone 5

6 Course Outline HA Overview MariaDB Replication Overview & Installation Complex Scenarios Semi-Synch Plugin MariaDB Replication Manager MariaDB Enterprise Cluster Configuration Schema Changes Back-Ups with MDBE Cluster MariaDB MaxScale Overview Installation Replication & Enterprise Cluster Disk Based Solutions DRBD DRBD Configuration DRBD Monitoring Shared Disk Clustering 6

7 Overview

8 High Availability Goals & Concepts Goals Ensure a Degree of Operational Continuity Data should Never be Lost due to a Crash End Users Should Never be Aware of Failures Concepts Remain Operational Despite Unforeseen Problems Requires System Redundancy of Software and Hardware Write Data to Multiple Devices and Locations (e.g., RAID, Replication, Clustering) No Single Point of Failure (SPOF) Fault-Tolerant Design 8

9 Designing for High Availability Determine Level of High Availability Needed Data Loss Acceptable Amount of Time or Number of Transactions User Experience and Expectations Automatic Failover or Manual Switchover Test Scenarios Provide Reasonable Service when Each Component is Down 9

10 Definition of Availability
Availability = Up-Time / (Up-Time + Down-Time)
90%       1 Nine    36.5 days per year
99%       2 Nines   3.65 days per year
99.9%     3 Nines   8.76 hours per year
99.99%    4 Nines   52 minutes per year
99.999%   5 Nines   5 minutes per year
99.9999%  6 Nines   31 seconds per year
Availability = Mean Time Before Failure / (Mean Time Before Failure + Mean Time To Recovery)
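As a worked example of the formula above: a system that is down 8 hours out of a 1,000-hour period has Availability = 992 / (992 + 8) = 99.2%, roughly two nines; at 99.9% availability the allowed downtime is 0.001 × 365 days × 24 hours ≈ 8.76 hours per year, matching the table.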

11 Terms Switchover Failover Failback Planned Scheduled Replication Unplanned Unscheduled Clustering Uptime Downtime Monitoring Availability Durability Scalability Cost SLA Redundancy 11

12 MariaDB Replication

13 Purpose of MariaDB Replication Load Balancing (Scaling SELECT Queries) Move Slow, Heavy Queries to Slave Take Slave Off-Line to Make Back-ups Multiple Data Centers Need Fast Reads Gain Redundancy (High Availability) Fail Over - Quickly Promote a Slave to Master Fail Over Isn't Automatic Requires External Monitoring Minimal Downtime for Upgrades or Schema Changes Apply Changes to a Slave Promote Slave to Master and Redirect Traffic Apply Changes to Master and Switch 13

14 Replication Terrain (diagram): the Master's mysqld receives INSERT/UPDATE/DELETE and CREATE/ALTER/DROP from Client Threads, applies them to Data Storage, and records them in the Binary Log; a Dump Thread sends the events to each Slave's IO Thread, which writes them to the Relay Log for the SQL Thread to apply to Data Storage. Slave 2 also acts as Master 2 for Slaves 2A, 2B, and 2C. 14

15 MariaDB Replication Factors One Master, Multiple Slaves No true Multi-Master Solution, but Circular Replication Close to Real Time, but Asynchronous Semi-Synchronous Replication Mode Crash-Safe Slaves with Transactional Storage Engines A Slave may also be a Master Set log_slave_updates in Configuration File (see the sketch below) Optionally Apply Replication Filtering Rules or Storage Engine Changes on Intermediate Slaves 15
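A minimal my.cnf sketch for an intermediate slave that also acts as a master, per the log_slave_updates note above (the server-id and log name are illustrative):
[mysqld]
server-id = 2            # unique within the replication topology
log-bin = mariadb-bin    # the intermediate slave must keep its own binary log
log_slave_updates = ON   # write replicated events to that binary log for downstream slaves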

16 Replication Threads Master binlog Dump Thread Pushes binlog Events to Slave Visible in SHOW PROCESSLIST as "Binlog Dump" Slave IO Thread Visible in SHOW SLAVE STATUS Requests and receives binlog events from the Master Writes them to the local relay log Slave SQL Thread Visible in SHOW SLAVE STATUS Reads the Relay Log and Executes Queries on Local Data Checks the Query Result Codes Match those Recorded by Master Slave Multiple Execution Threads Multi-Threaded Slave separates events based on Database Names Updates are Applied in Parallel, Not in Sequence 16

17 Parallel Replication Replication Process on Slaves Events Received from Master by IO Thread and Queued in Relay Log Each Relay Log Entry is Retrieved by the SQL Thread Each Transaction is Applied to the Slave On Non-Parallel Systems, Application Performed Sequentially by SQL Thread On Parallel Systems, Application Performed in Pool of Separate Replication Worker Threads Documentation on Parallel Replication: 17
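A configuration sketch enabling the pool of replication worker threads described above (the thread count is an assumption; tune it to the workload):
[mysqld]
slave_parallel_threads = 4          # size of the worker thread pool (0 = classic single SQL thread)
slave_parallel_mode = conservative  # default mode; parallelizes only transactions that group-committed together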

18 Topologies Master to Slave Simplest Solution and Used Most Widely Allows Off-Loading of SELECT Traffic to Slave Master1 to MasterN... to Master1 (circular) Servers Replicate in a Circle, with binlog Events Traversing the Ring until they Reach the Originating Server Does Not Alleviate Heavy Write Load Needs Careful Setup of server-id and auto_increment_offset, auto_increment_increment Settings (see the sketch below) Master to Slave to Slaves Can Build Complex Trees Useful for Replication Rules or Storage Engine Changes 18
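For the circular topology above, a sketch of the auto-increment settings that keep generated keys from colliding in a two-master ring (values are illustrative):
# master 1
auto_increment_increment = 2   # step by the number of masters in the ring
auto_increment_offset = 1      # this server generates 1, 3, 5, ...
# master 2 would use auto_increment_offset = 2 to generate 2, 4, 6, ...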

19 Master Configuration Enable Binary Log Choose Binary Log Format Set server-id in Configuration File to Unique Value Create Replication User Account on Master GRANT REPLICATION SLAVE ON *.* TO 'maria_replicator'@'%' IDENTIFIED BY 'rover123'; Make a Consistent Snapshot of Data on Master mysqldump -p -u admin_backup --master-data --flush-logs \ --all-databases > full-dump.sql 19
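A minimal master-side my.cnf sketch covering the steps above (the id and log name are illustrative):
[mysqld]
server-id = 1            # unique value
log-bin = mariadb-bin    # enable the binary log
binlog_format = MIXED    # the chosen binary log format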

20 Slave Configuration Set server-id in Configuration File to Unique Value Add read-only in Configuration File to Prevent Writes Set Optionally Replication Rules Covered Later in Class Restart MariaDB Load Data from Master mysql -p -u root < full-dump.sql Point the Slave at the Master, then Execute START SLAVE on Slave CHANGE MASTER TO MASTER_HOST='<master-host>', MASTER_PORT=3306, MASTER_USER='maria_replicator', MASTER_PASSWORD='rover123'; Documentation on Slave Options: Documentation on CHANGE MASTER TO: 20

21 Monitoring Replication Check Regularly Status on Master Includes binlog number and position SHOW MASTER STATUS; Check More Often Status of Replication on Slave SHOW SLAVE STATUS \G
Slave_IO_State: Waiting for master to send event
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Last_Errno: 0
Last_Error:
Seconds_Behind_Master: 0
Documentation on SHOW MASTER STATUS: Documentation on SHOW SLAVE STATUS: 21

22 Replication Files Binary Log Files (Master) Master Records Write-Queries to File Rotated when Flushed or Periodically to New Log File File Name Pattern (e.g., mariadb-bin.000001) Relay Log File (Slave) Record of Master binlog Events Rotated when Flushed or Periodically File Name Pattern (e.g., mariadb-relay-bin.000001) Replication Configuration Recorded in master.info (Slave) Name of Relay Log File Recorded in relay-log.info (Slave) Documentation mysqlbinlog: 22

23 Slave Configuration Files master.info and relay-log.info (contents include the master host, the replication user maria_replicator, its password, and the current mariadb-bin log file and position) 23

24 Replication File Maintenance & Back-Ups Replication Files Updated & Purged Automatically Don't Edit or Move Manually Include Replication Files when Making Binary Backups Use --raw option with mysqlbinlog to Back-up Binary Log 24
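A hedged example of the --raw back-up mentioned above, streaming binary logs from a running master (the host, user, and starting log name are assumptions):
mysqlbinlog --read-from-remote-server --host=master1 --user=admin_backup -p \
  --raw --to-last-log mariadb-bin.000001
Executed from Command-Line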

25 Binary Log Format
Statement Based (SBR) - Original Queries are Replicated Least Data Sent over Wire and Tested for Years Non-Deterministic Statements Executed on Slave Slave Load is Increased vs. RBR
Row Based (RBR) - Table Rows are Replicated Only Non-Deterministic Statements Executed on Master Slave Load is Reduced vs. SBR More Data sent over Wire Not Supported by All Engines
Mixed (default) - Smart Switching between SBR and RBR
[mysqld]
binlog-format=mixed
Checksum (--binlog-checksum) in Binary and Relay Logs to detect Errors Includes Errors in Memory, Disk, Network and Database Can be Implemented for each Slave
Documentation on Binary Log Format: 25

26 Slave Filtering Rules Database Level Exclude Specific Databases (e.g., mysql) CHANGE REPLICATION FILTER REPLICATE_IGNORE_DB = (mysql); SET GLOBAL replicate_ignore_db = 'mysql'; Include Specific Databases CHANGE REPLICATION FILTER REPLICATE_IGNORE_DB = (), REPLICATE_DO_DB = (sales,inventory); SET GLOBAL replicate_ignore_db = ''; SET GLOBAL replicate_do_db = 'sales,inventory'; Excluding can Cause Problems with Joins Documentation on Slave Options: 26

27 Slave Filtering Rules Table Level Ignore Specific Tables CHANGE REPLICATION FILTER REPLICATE_IGNORE_TABLE = (employees.salary); Include Specific Tables CHANGE REPLICATION FILTER REPLICATE_IGNORE_DB = (employees), REPLICATE_DO_TABLE = (employees.names, employees.contacts); Wildcards for Multiple Tables CHANGE REPLICATION FILTER REPLICATE_IGNORE_DB = (sales), REPLICATE_DO_TABLE = (sales.europe_%), REPLICATE_IGNORE_TABLE = (sales.europe_uk_%); Documentation on Slave Options: 27

28 MariaDB Replication - Asynchronous Master Doesn t Wait for Slaves IO Thread may be Slow to Receive binlog Packets Network Congestion or Disconnects SQL Thread may be Slow in Processing Relay Log Events Load on Slave or Network Problems 28

29 Semi-Synchronous Implemented with an optional Plug-In A COMMIT on Master can Wait for a Slave to Acknowledge it has Received the Transaction Master Waits for Slave to Write Transaction to Relay Log, Not to Execute Transaction Slave SQL Thread may still Lag INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so'; INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so'; One Slave Response needed for Master to Continue (Semi-Synchronous, Not Synchronous) Can Significantly Affect Performance of Master Documentation on Semi-Synchronous Replication: 29

30 Lagging Slave When Slave SQL Thread is Slow or Disabled due to Errors, Slave is said to Lag behind Master Slave SQL Thread must Execute Serially Queries that were Executed in Parallel on the Master Slave Multiple Execution Threads Multi-Threaded Slave separates events based on Database Names Updates are Applied in Parallel, Not in Sequence Time-Delayed Replication - Setting CHANGE MASTER TO MASTER_DELAY = seconds; Provides a Buffer to Stop Replication of Mistakes (see the sketch below) Documentation on Time-Delayed Replication: 30
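A usage sketch of the time-delayed replication referenced above; the slave must be stopped while the delay is changed, and the one-hour value is illustrative:
STOP SLAVE;
CHANGE MASTER TO MASTER_DELAY = 3600;  -- keep this slave one hour behind the master
START SLAVE;
Executed from mysql Client on Slave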

31 Troubleshooting Problems with Replication Check Slave Error Log for Errors affecting Replication Look for Disconnects from Network Problems Binary or Relay Log Event Corruption will cause Slave SQL Thread to Stop Different Query Error Codes on Slave indicate it's Not Synchronized with Master Tools like Maatkit can help with Replication Troubleshooting and Recovery May Need to Rebuild Slave from a fresh Snapshot (Back-up of Master or Another Slave) 31

32 MariaDB Replication Complex Scenarios

33 Non-Typical Replication Methods MariaDB Replication can use Non-Traditional Topologies Circular Replication Multi-Source Replication 33

34 Circular Replication All Clients can Write to any Server A Topology that can be Error Prone (diagram: server-id 100, server-id 200, server-id 300 in a ring) 34

35 Multi-Source Replication Two Masters server-id 101 server-id 102 t1 t2 One Slave server-id 103 t1 or t2 35

36 MariaDB Replication Semi-Sync Plugin

37 MariaDB Replication Asynchronous Master Doesn t Wait for Slaves IO Thread may be Slow to Receive binlog Packets Network Congestion or Disconnects SQL Thread may be Slow in Processing Relay Log Events Load on Slave or Network Problems 37

38 Semi-Synchronous Implemented with Semi-Synchronous Replication Plugins, one for Master and one for Slaves A COMMIT on Master can Wait for a Slave to Acknowledge it has Received the Transaction Master Waits for Slave to Write Transaction to Relay Log, Not to Execute Transaction - Slave SQL Thread may still Lag One Slave Response needed for Master to Continue (Semi-Synchronous, Not Synchronous) Can Significantly Affect Performance of Master Documentation on Semi-Synchronous Replication: 38

39 Install Set up MariaDB Replication Install Semi-Synchronous Plugins on Master and Slave INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so'; INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so'; Execute the SHOW PLUGINS Statement 39

40 Enable Semi-Synchronous Slave Acknowledges Receipt after Transactions are Written to Relay Log and Flushed Master Switches to Asynchronous Replication if Slave doesn't Acknowledge Transaction before Timeout SET rpl_semi_sync_master_enabled = ON; SET rpl_semi_sync_slave_enabled = ON; SET rpl_semi_sync_master_timeout = 10000; Semi-Synchronous Replication is Resumed when at least One Slave Synchronizes 40

41 Register New Semi-Synchronous Slave Restart the Slave I/O Thread of an Existing Asynchronous Slave to Register it as a Semi-Synchronous Slave when it Connects to Master STOP SLAVE IO_THREAD; START SLAVE IO_THREAD; Otherwise Slave will continue to use Asynchronous Replication 41

42 Semi-Synchronous Replication Master Set rpl_semi_sync_master_enabled variable to ON or 1 to Enable Semi- Synchronous Replication Master Disabled by Default SET rpl_semi_sync_master_enabled = ON; 42

43 Timeout for Semi-Synchronous Replication When Time for Commit Acknowledgement is Exceeded, Master Reverts to Asynchronous Replication Default Time is 10000 milliseconds (i.e., 10 seconds) A Timeout Value of 0 or Greater is Accepted When Reverting, Status Variable rpl_semi_sync_master_status is Set Automatically to OFF SET rpl_semi_sync_master_timeout = 10000; 43

44 Tracing Level for Semi-Synchronous Replication There are Four Tracing Levels for Semi-Synchronous Replication SET rpl_semi_sync_master_trace_level = 32; 1 General Level (e.g., Time Function Failures) 16 Detailed, Verbose Level 32 Net Wait Level (e.g., Information about Network Waits) 64 Function Level (e.g., Information about Function Entries and Exits) Default Value is 32 44

45 Waiting Regardless of Slave Count When Disabled, Master will Revert to Asynchronous Replication when Slave Count (rpl_semi_sync_master_clients) is 0 When Enabled, Master will Wait for Timeout Period regardless of Slave Count SET rpl_semi_sync_master_wait_no_slave = ON; 45

46 Waiting for Synchronization Set rpl_semi_sync_master_wait_point to AFTER_SYNC so Master will wait for Acknowledgement that Transaction Synchronized with Slave's binlog SET rpl_semi_sync_master_wait_point = AFTER_SYNC; All Clients see Same Data on Master at Same Time After Acknowledgement by a Slave and After Committed to Storage Engine on Master Failover is Lossless if Master Crashes because All Transactions Committed on Master were Replicated on a Slave 46

47 Waiting for Commit Set rpl_semi_sync_master_wait_point to AFTER_COMMIT, so Master waits for a Slave to Commit to Storage Engine Return Status is Received for Transaction After Server Commits to Storage Engine and Receives Slave Acknowledgement Other Clients may see Committed Transaction before Committing Client If Master Crashes, it's Possible that Clients will see Data Loss Relative to what's on Master SET rpl_semi_sync_master_wait_point = AFTER_COMMIT; 47

48 Slave Trace Level Set Tracing Level for Semi-Synchronous Replication on Slave as for Master (i.e., rpl_semi_sync_master_trace_level) SET rpl_semi_sync_slave_trace_level = 32; Default Value is 32 48

49 MariaDB Replication Manager

50 MariaDB Replication Manager High Availability Solution to Monitor and Administer MariaDB Replication and MariaDB Enterprise Clusters Topology Detection and Monitoring On-Demand Slave to Master Promotion (i.e., Switchover) Electing a New Master on Failure Detection (i.e., Failover) Documentation on MariaDB Replication Manager: 50

51 Features High Availability Support with Leader Election; Semi-Sync Replication Support; Provisioning Bootstrap; HTTP Daemon Mode; Alerts; Configuration File; Two-Node Multi-Master Support; Switchover Support in Live Mode; Failover SLA Tracking; Log Facilities and Verbosity; Docker Images; Docker Deployment via OpenSVC in Google Cloud; Docker Deployment via OpenSVC On-Premise for Ubuntu and Mac OS X 51

52 Replication Manager Advantages Leader Performance Not Affected by Dysfunctional or Heterogeneous Nodes Leader Pick Performance is Not Impacted by Data Replication Read Scalability Doesn't Impact Write Scalability Network Inter-Connect Quality Fluctuation Manual Intervention Better on False Positive Failure Detection Minimum Cluster of Two Data Nodes is Better Benefits to having Different Storage Engines 52

53 Replication Manager Disadvantages Overloading the Leader can cause Data Loss during Failover READ on Replica is eventually Consistent ACID can be Preserved via Route to Leader Always 53

54 Switchover Process Preserving data consistency Verify Replication Settings Check (configurable) Replication on Slaves Check for Long Running Queries on Master Elect New Master Most Up-to-Date or Designated Candidate Put down the IP Address on Master by calling an Optional Script Reject Writes on Master by executing FLUSH TABLES WITH READ LOCK Reject Writes on Master by Setting READ_ONLY Flag Reject Writes on Master by Decreasing MAX_CONNECTIONS Kill Pending Connections on Master Watch for All Slaves to catch up to Current GTID Position Promote the Candidate Slave to be New Master Put up the IP Address on New Master by calling an Optional Script Switch Other Slaves and Old Master to be Slaves of New Master and Set them as Read-Only 54

55 Arbitrator & Proxy MRM is often Used as an Arbitrator and Proxy to Route Database Traffic to Master Use a Layer 7 Proxy such as MaxScale that can Transparently Follow a Newly Elected Topology With Monitor-Less Proxies, MRM can call Scripts to Set and Reload the New Configuration of the Leader Route A Common Setup is VRRP Active-Passive HAProxy, Sharing Configuration via a Network Disk with MRM Scripts
[MySQL Monitor]
type=monitor
module=mysqlmon
servers=%%env:servers_list%%
user=root
passwd=%%env:myrootpwd%%
monitor_interval=500
detect_stale_master=true
[Write Connection Router]
type=service
router=readconnroute
router_options=master
servers=%%env:servers_list%%
user=root
passwd=%%env:myrootpwd%%
enable_root_user=true
55

56 Pacemaker Resource MariaDB Replication Manager can be called as a Pacemaker Resource Used as an API Component of a Group Communication Cluster 56

57 Leader Election Asynchronous Cluster Guarantee Continuity of Service at No Cost to Leader and in some conditions with "No Data Loss" Replication Manager will Track Failover Service Level Availability (SLA) Replication Manager enforces some Configurable Settings to Constrain the State in which a Failover Occurs 57

58 Service Level Availability & Failover SLA and Failover Scenario Classifications Replica Stream in Synch Replica Stream Not in Synch but State Allows Failover Replica Stream Not in Synch but State does Not Allow Failover 58

59 In-Synch State Failover done without Loss of Data when Replication was in Synch Replication Manager waits for All Replicated Events to be Applied to the Elected Replica Before Re-Opening Traffic Various Settings Recommended to Generally Achieve this State See Next Slides for More Recommended Settings 59

60 Replication at Full Speed Replication can typically stay in Synch with Master with New Features Group Commit Optimistic In-Order Parallel Replication Semi-Synchronous Replication Optimistic Parallel Replication Settings:
slave_parallel_mode = optimistic
slave_domain_parallel_threads = %%ENV:CORES%%
slave_parallel_threads = %%ENV:CORES%%
expire_logs_days = 5
sync_binlog = 1
log_slave_updates = ON
60

61 Usage of Semi-Synchronous Replication Delays Transaction Commit until a Replica gets the Transactional Event Synch Status is Lost only when the Replication Delay is Attained Synch Status is Checked to Compute the last SLA Metrics Determines When Auto-Failover may occur Without Losing Data and When the Dead Leader can be Reintroduced without Re-Provisioning
plugin_load = "semisync_master.so;semisync_slave.so"
rpl_semi_sync_master = ON
rpl_semi_sync_slave = ON
loose_rpl_semi_sync_master_enabled = ON
loose_rpl_semi_sync_slave_enabled = ON
rpl_semi_sync_master_timeout = 10
Records a Warning in the Error Log on Slaves if SemiSyncMaster Status is Off 61

62 Not In-Synch & Failable State When Replication is Not Delayed for Long, Replication Manager can still Auto-Failover Data Loss is Possible, but High Availability is Preferred A Second SLA Tracks the Time the Cluster can Failover under Predefined Conditions in Replication Manager (All Slave Delays Not Exceeded) 62

63 Not In-Synch & Failable State Data Loss Probability Increased with Single Slave Topology when: Slave Delayed by Long Running Transaction Stopped for Maintenance and Catching up on Replication Events Heavy Single Threaded Write Process Network Performance Can't keep up with Leader Performance Minimize by Using at Least Three Nodes in a Cluster Removes some Scenarios (e.g., Losing a Slave) 63

64 Not In-Synch & Unfailable State First SLA Tracks the Presence of a Valid Topology when A Leader is Reachable but Number of Possible Failovers Exceeded Time before Next Failover not yet reached No Slave Available to Failover Opportunity to handle Long Running Write Transactions and Split into Smaller Components Minimize Time in this State as Failover not Possible without Significant Impact Replication Manager can Force Interactive Mode 64

65 Data Consistency Inside Switchover Replication Manager Prevents Additional Writes by Setting READ_ONLY on the Old Leader; if Routers are Still Sending Write Transactions, They Could Accumulate until Timeout despite being Killed by Replication Manager To Prevent Delayed Writes, max_connections on the Server is Decreased to 1, and Replication Manager uses the Last Connection without Crashing Use the Extra Port provided with the MariaDB Thread Pool Feature to Avoid being Unable to Connect to the Node
thread_handling = pool-of-threads
extra_port = 3307
extra_max_connections = 10
65

66 Protecting Data Consistency Disable SUPER Privilege for Write Users MaxScale User when Read-Write Split Module is set to Check for Replication Lag
[Splitter Service]
type=service
router=readwritesplit
max_slave_replication_lag=30
CREATE USER <maxscale-user> IDENTIFIED BY 'maxpwd';
GRANT SELECT ON mysql.user TO <maxscale-user>;
GRANT SELECT ON mysql.db TO <maxscale-user>;
GRANT SELECT ON mysql.tables_priv TO <maxscale-user>;
GRANT SHOW DATABASES, REPLICATION CLIENT ON *.* TO <maxscale-user>;
GRANT ALL ON maxscale_schema.* TO <maxscale-user>;
66

67 Procedural Command-Line Example Switchover Mode
replication-manager switchover \
  --hosts=db1,db2,db3 --user=root \
  --rpluser=replicator --interactive
Master Host db1 and Slaves db2 and db3 67

68 Procedural Command-Line Example Non-Interactive Failover Mode
replication-manager failover \
  --hosts=db1:3306,db2:3306,db3:3306 \
  --user=root:pass --rpluser=repl:pass \
  --pre-failover-script="/usr/local/bin/vipdown.sh" \
  --post-failover-script="/usr/local/bin/vipup.sh" \
  --verbose --maxdelay=15
Uses root User for Management and repl User for Replication Pre- and Post-Failover Scripts Given Maximum Slave Delay of 15 seconds before Performing Failover 68

69 Monitoring with Console Mode
replication-manager monitor \
  --hosts=db1:3306,db2:3306,db3:3306 \
  --user=root:pass --rpluser=repl:pass
69

70 Console Commands Several Commands in Console Mode Ctrl-D Print debug information Ctrl-F Manual Failover Ctrl-I Toggle automatic/manual failover mode Ctrl-R Set slaves read-only Ctrl-S Switchover Ctrl-Q Quit Ctrl-W Set slaves read-write 70

71 HTTP Server Start Replication Manager in Background to Monitor a Cluster with an HTTP Server Controlling the Daemon
replication-manager monitor \
  --hosts=db1:3306,db2:3306,db3:3306 \
  --user=root:pass --rpluser=repl:pass \
  --daemon --http-server
Accessible on the Default HTTP Address and Port 71

72 Replication Manager Dashboard Don't Use in Production: Doesn't Have Protected Access Unless you Devise a Way to Restrict Access 72

73 MariaDB Enterprise Cluster

74 Advantages of MariaDB Enterprise Cluster Parallel Slave Applying, Practically No Slave Lag Instant, Trivial Failover Automatic Node Provisioning Works Well in WAN 74

75 Nuances Uses Only InnoDB Primary Keys are Necessary Commit Latency Transaction Size Limited to 2GB Unlimited in Future Versions DEADLOCK on COMMIT AUTO_INCREMENT Handled Differently No Cluster-Wide Read Locks 75

76 MariaDB Replication Server Centric Style Asynchronous Replication No Conflict Detection Multi-Master Replication Master Server Slave Server If Node C Crashes, Does Cluster Survive? If Node B Crashes and Clients Switch to C, How does Node B Rejoin? Which Node has Data X? How do you Back-Up Cluster? Node A Node C Node B 76

77 Galera Approach Data Centric Style Data Doesn t Belong to a Node Nodes Belong to Data Data is Synchronized among Two or More Servers DataSet Server 1 Server 2 Server 3 Server N 77

78 Galera Approach Galera Nodes are Anonymous All are Equal Galera Cluster is One Large Distributed Master A DataSet Needs an Identifier The DataSet Identifier is a Cluster Identifier 00295a79-9c48-11e2-bdf0-9a916cbb9294 DataSet Cluster 78

79 Global Transaction Identifier (GTID) DataSet plus Sequence of Atomic Changes equals GTID 00295a79-9c48-11e2-bdf0-9a916cbb9294:64201 79

80 Global Transaction Identifier (GTID) Initial DataSet 00295a79-9c48-11e2-bdf0-9a916cbb9294:0 First Change and Transaction 00295a79-9c48-11e2-bdf0-9a916cbb9294:1 Undefined GTID 00000000-0000-0000-0000-000000000000:-1 80

81 Global Transaction Identifier (GTID) MySQL 5.6 GTID e-7c1e-11e2-a6e ef5:12345 (server identifier : transaction processed by server) MariaDB 10 GTID, e.g., 0-1-100 (domain - server identifier - data change in asynchronous cluster) Galera GTID 00295a79-9c48-11e2-bdf0-9a916cbb9294:64201 (data & cluster identifier : data change in cluster) 81

82 Global Transaction Identifier (GTID) Visible in MySQL 5.6 e-7c1e-11e2-a6e ef5:12345 e-7c1e-11e2-a6e ef5:12346 e-7c1e-11e2-a6e ef5:12347 New Master Promoted f4e3bf7a-a91f-11e2-4e02-3f8dbcffaed8:1 f4e3bf7a-a91f-11e2-4e02-3f8dbcffaed8:2 f4e3bf7a-a91f-11e2-4e02-3f8dbcffaed8:3 82

83 Global Transaction Identifier (GTID) Visible in MariaDB (e.g., 0-101-100, 0-101-101, 0-101-102) New Master Promoted (e.g., 0-102-103, 0-102-104, 0-102-105: the server identifier changes while the domain and sequence continue) 83

84 Global Transaction Identifier (GTID) Visible in Galera 00295a79-9c48-11e2-bdf0-9a916cbb9294:64201 00295a79-9c48-11e2-bdf0-9a916cbb9294:64202 00295a79-9c48-11e2-bdf0-9a916cbb9294:64203 New Master Promoted 00295a79-9c48-11e2-bdf0-9a916cbb9294:64204 00295a79-9c48-11e2-bdf0-9a916cbb9294:64205 00295a79-9c48-11e2-bdf0-9a916cbb9294:64206 84

85 Master or Slave Not a Node Role or Function A Relation Between a Node and a Client (diagram: within one cluster, the node that client1 writes to is its master and the remaining nodes are its slaves, while client2 treats a different node as its master) 85

86 Cluster Address wsrep_cluster_address Documentation on Galera Cluster Addresses: 86

87 Cluster Address (diagram: the first node starts with wsrep_cluster_address = gcomm:// and performs the handshake that bootstraps a new cluster) 87

88 Cluster Address

89 Cluster Address

90 Cluster Address wsrep_cluster_address = gcomm://node1,node2 Try to Connect to Members (node1, node2) Can only Join a Running Cluster

91 Node Synchronization (State Transfer)

92 Node Synchronization (State Transfer) (diagram: node states include undefined, joined, synced, and desync) 92

93 Node Synchronization (State Transfer) (diagram): a New Node starts UNDEFINED and becomes a JOINER; the Cluster verifies a synched node and selects it as DONOR; the Donor transfers the missing data over a private channel; both nodes pass through JOINED, catch up, and return to SYNCHED 93

94 Avoiding Split Brain Distinguishing Server Crash from Network Failure in Shared Nothing Architecture Decision Algorithm Used to Avoid Split Brain Absolute Majority Needed in Galera Uneven Number of Nodes Safer 94

95 Primary Component Primary 95

96 Primary Component Continues Working Primary Non-Primary 96

97 Primary Component Continues Working Tries to Reconnect Primary Non-Primary 97

98 Primary Component Primary 98

99 Split Brain Non-Primary Split Brain Possible with Even Number of Nodes Non-Primary 99

100 Synchronous Penalties Galera Copies the Data Buffer to All Cluster Members on COMMIT from a Client (~1 RTT added latency) Connection throughput equals 1/RTT trx/sec Total throughput equals (1/RTT) × #connections trx/sec A Given Row Can't be Modified More Than 1/RTT Times per Second Round Trip Time (RTT) is the Length of Time for a Signal to be Sent plus Receipt of its Acknowledgement 100
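A worked example of the formulas above, assuming a 2 ms round trip between the most distant nodes: one connection can commit at most 1/0.002 = 500 transactions per second, 20 connections can reach roughly 500 × 20 = 10,000 transactions per second in total, and any single row can still be modified at most about 500 times per second.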

101 Galera Synchronous Penalty in WAN (EC2) 101

102 Galera Synchronous Penalty in WAN (EC2) 102

103 Slave Lag in Galera (diagram): the Client runs START TRANSACTION, PROCESS, and COMMIT on the Master Node; the write set is Replicated to the Slave Node and Certified, the Slave Acknowledges, and the Master returns OK and COMMITs; the Slave APPLYs the write set slightly later, so an immediate SELECT on the Slave can return stale data 103

104 MDBE Cluster Configuration

105 Installing from a Repository MariaDB Repository Configuration Tool Available for all Major Distributions Downloading MariaDB & Galera: 105

106 Installing MariaDB Galera Cluster Install Extra Packages for Enterprise Linux $ yum install epel-release Install Galera Enabled MariaDB Server $ yum install MariaDB-server Install Network Copying Tool $ yum install socat 106

107 Additional Utilities XtraBackup Useful for SST $ yum install xtrabackup Percona Toolkit $ yum install percona-toolkit XtraBackup Repo: 107

108 Linux Configuration
SELinux: Disable Enforcement or Set Server to Permissive SELinux Mode
cat /selinux/enforce
echo 0 > /selinux/enforce
Executed from Command-Line
SELINUX=permissive
Excerpt from /etc/selinux/config
Set Start Timeout for systemd (CentOS 7+)
[Service]
TimeoutStartSec="30 min"
systemctl daemon-reload
Executed from Command-Line
Start Timeout Settings: SELinux Configuration: 108

109 Linux Firewall Ports Used Port 3306 Client Connections to Nodes Port 4567 Replication Protocol Port 4568 Incremental State Transfers (IST) Port 4444 Snapshot State Transfer (SST; socat)
iptables -L
/etc/init.d/iptables stop
chkconfig iptables off
Executed from Command-Line
IST & SST Methods may have Additional Connectivity Requirements mysqldump requires mysql client connections (port 3306) between Nodes Rsync and XtraBackup use Netcat (nc) between Nodes 109
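Rather than stopping the firewall entirely, a hedged iptables sketch that opens only the ports listed above (the source subnet is an assumption):
iptables -A INPUT -p tcp -s 192.168.0.0/24 --dport 3306 -j ACCEPT   # client connections
iptables -A INPUT -p tcp -s 192.168.0.0/24 --dport 4567 -j ACCEPT   # replication protocol
iptables -A INPUT -p tcp -s 192.168.0.0/24 --dport 4568 -j ACCEPT   # IST
iptables -A INPUT -p tcp -s 192.168.0.0/24 --dport 4444 -j ACCEPT   # SST (socat)
Executed from Command-Line on Each Node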

110 MariaDB Configuration Only InnoDB Tables will be Replicated --enforce-storage-engine=innodb
[mysqld]
max_connections=1024
binlog_format=row
innodb_buffer_pool_size=200m
innodb_log_file_size=100m
innodb_flush_log_at_trx_commit=2
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
Documentation on Enforcing Storage Engine Setting: Documentation on InnoDB Flush Log at Transaction Commit Setting: 110

111 MariaDB Configuration Galera Settings
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name="our_cluster"
wsrep_node_name=node1
wsrep_cluster_address="gcomm://<node1-ip>,<node2-ip>,<node3-ip>"
wsrep_node_address=<node1-ip>
wsrep_slave_threads=1
wsrep_sst_method=rsync
wsrep_sst_auth=galera:galera
wsrep_on=on
111

112 Start First Node Start galera1 as First Node Use SHOW STATUS and SHOW VARIABLES to List Galera Options
systemctl start mariadb
galera_new_cluster
Executed from Command-Line on First Node
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
SHOW GLOBAL VARIABLES LIKE 'wsrep%';
Executed from mysql Client on First Node
mysql -u root -p -e \
  "SHOW VARIABLES LIKE 'wsrep_provider_options'"
Executed from Command-Line on First Node 112

113 Configure Galera Variables with wsrep_ Prefix Configure MariaDB Replication Behavior The wsrep_provider_options Variable is a Collection of Options that Control Galera Replication Plugin Behavior Galera System Variables: Galera Status Variables: 113

114 Default Enterprise Cluster Related Values base_host = base_port = 4567 cert.log_conflicts = no evs.causal_keepalive_period = PT1S evs.debug_log_mask = 0x1 evs.inactive_check_period = PT0.5S evs.inactive_timeout = PT15S evs.info_log_mask = 0 evs.install_timeout = PT15S evs.join_retrans_period = PT1S evs.keepalive_period = PT1S evs.max_install_timeouts = 1 evs.send_window = 4 evs.stats_report_period = PT1M evs.suspect_timeout = PT5S evs.use_aggregate = true evs.user_send_window = 2 evs.version = 0 evs.view_forget_timeout = PT5M Galera System Variables: Galera Status Variables: gcache.dir = /var/lib/mysql/ gcache.keep_pages_size = 0 gcache.mem_size = 0 gcache.name = /var/lib/mysql// galera.cache gcache.page_size = 128M gcache.size = 128M gcs.fc_debug = 0 gcs.fc_factor = 1 gcs.fc_limit = 16 gcs.fc_master_slave = NO gcs.max_packet_size = gcs.max_throttle = 0.25 gcs.recv_q_hard_limit = gcs.recv_q_soft_limit = 0.25 gcs.sync_donor = NO gmcast.listen_addr = tcp:// :4567 gmcast.mcast_addr = gmcast.mcast_ttl = 1 gmcast.peer_timeout = PT3S gmcast.time_wait = PT5S gmcast.version = 0 gmcast.segment = 0 ist.recv_addr = pc.checksum = true pc.ignore_quorum = false pc.ignore_sb = false pc.linger = PT20S pc.npvo = false pc.version = 0 pc.weight = 1 protonet.backend = asio protonet.version = 0 replicator.causal_read_timeout = PT30S replicator.commit_order = 3 114

115 Start Other Nodes Second and Third Nodes must be Provided an Address for Connecting to the Cluster Use wsrep_cluster_address with the IP Address of the Node and at Least One Other Node
wsrep_cluster_address = gcomm://<node1-ip>,<node2-ip>,<node3-ip>
Excerpt from Configuration File of Each Node
systemctl start mariadb
Executed from Command-Line on Second and Third Node 115

116 Cluster Address Write All Nodes Planned for Cluster in Address String wsrep_cluster_address=gcomm://<node1-ip>,<node2-ip>,... Don't Leave wsrep_cluster_address=gcomm:// in my.cnf The wsrep_urls Parameter for mysqld_safe is Deprecated Galera Cluster Address: 116

117 MariaDB Configuration
[mysqld]
log-bin
binlog_format=row
wsrep_on=on
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name="our_cluster"
wsrep_node_name=node2
wsrep_cluster_address="gcomm://<node1-ip>,<node2-ip>,<node3-ip>"
wsrep_node_address=<node2-ip>
wsrep_slave_threads=1
wsrep_sst_method=rsync
wsrep_sst_receive_address=<node2-ip>:3306
wsrep_sst_auth=galera:rover123
117

118 Start Other Nodes Join the Second and Third Nodes to Cluster Node 2 systemctl start mariadb Executed from Command-Line on Second Node SHOW GLOBAL STATUS LIKE 'wsrep%'; Executed from mysql Client on Second Node Node 3 systemctl start mariadb Executed from Command-Line on Third Node SHOW GLOBAL STATUS LIKE 'wsrep%'; Executed from mysql Client on Third Node 118

119 Initial Cluster Startup All Nodes Should Be Running Now and Consistent Test Replication 1 CREATE TABLE test.table1 (col1 INT UNSIGNED KEY); INSERT INTO test.table1 VALUES (1),(2),(3); Executed from mysql Client on Node 1 2 SELECT * FROM test.table1; INSERT INTO table1 VALUES (4); Executed from mysql Client on Node 2 3 SELECT * FROM table1; Executed from mysql Client on Node 3 119
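To confirm the cluster formed, the size and local state can be checked on any node; a sketch, where the values shown are what a healthy three-node cluster is expected to report:
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';         -- expect Value: 3
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';  -- expect Value: Synced
Executed from mysql Client on Any Node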

120 Sysbench Benchmarking Tool Install RPM Packages To Build a Database for Testing: sysbench [options] prepare To Run a Test on Database: sysbench [options] run
$ wget <sysbench-package-url>
$ rpm -ivh <sysbench-package.rpm>
Create a Database
$ sysbench --test=sysbench_tests/db/common.lua --mysql-host=node1 \
  --mysql-user=test --mysql-db=sbtest --oltp-table-size=<rows> prepare
$ sysbench --test=sysbench_tests/db/oltp.lua --mysql-host=node1 \
  --mysql-user=test --mysql-db=sbtest --oltp-table-size=<rows> \
  --report-interval=1 --max-requests=0 --tx-rate=10 run | grep tps
Set Up SysBench: Project Page: 120

121 MDBE Cluster Schema Changes

122 Schema Upgrades DDL is Non-Transactional Bad for Replication Galera has Two Methods for DDL Total Order Isolation (TOI) Rolling Schema Upgrade (RSU) Use wsrep_osu_method to choose Option pt-online-schema-change provides Lockless Schema Upgrade Use TOI Mode Careful of Foreign Keys 122
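A sketch of choosing the DDL method per session, plus a hypothetical pt-online-schema-change invocation (the column name is an assumption; database and table follow the sysbench example used later in this course):
SET SESSION wsrep_osu_method = 'RSU';  -- or 'TOI', the default
Executed from mysql Client
pt-online-schema-change --alter "ADD COLUMN notes TEXT" D=sbtest,t=sbtest --execute
Executed from Command-Line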

123 Total Order Isolation DDL is Replicated Up-Front Each Node gets the DDL Statement and Must Process DDL at Same Slot in Transaction Stream Galera will Isolate the Affected Table or Database for Duration of DDL Processing and Lock the Cluster Documentation on Isolation Levels: 123

124 Rolling Schema Upgrade DDL is Not Replicated Galera will Remove Node from Replication for Duration of DDL Processing When done with DDL, Node will get Missed Transactions (e.g., IST) DBA should Roll RSU Operation over All Nodes Requires Backward Compatible Schema Changes Use Only under Certain Conditions Planned SQL is Not Conflicting SQL will Not Generate Inconsistency 124

125 Schema Upgrade Strategy Best Practices Plan Upgrades Try to make Backwards Compatible Rehearse Upgrades Determine DDL Execution Time Use RSU if Possible ALTER TABLE to Create New AUTO_INCREMENT Column will Cause Problems Every Node has Different AUTO_INCREMENT and Offset Settings 125

126 Load Testing a Schema Upgrade Start Moderate sysbench Load
sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
  --mysql-user=root --mysql-password=fido1123 \
  --mysql-db=sbtest --oltp-table-size=25000 \
  --report-interval=5 --max-requests=0 --tx-rate=10 run
Issue Some DDL under TOI
1 ALTER TABLE sbtest ADD COLUMN (m int UNSIGNED KEY); Executed from mysql Client on Node 1
2 ALTER TABLE sbtest ADD COLUMN (n int UNSIGNED); Executed from mysql Client on Node 2
3 CREATE TABLE m (i int); Executed from mysql Client on Node 1
4 CREATE TABLE n (i int); Executed from mysql Client on Node 2
126

127 Test Schema Upgrade Issue some DDL under RSU
1 SET GLOBAL wsrep_osu_method=rsu; ALTER TABLE sbtest DROP COLUMN m; SHOW CREATE TABLE sbtest; Executed from mysql Client on Node 1
2 SET GLOBAL wsrep_osu_method=rsu; ALTER TABLE sbtest DROP COLUMN n; Executed from mysql Client on Node 2
3 DROP TABLE m; Executed from mysql Client on Node 1
4 DROP TABLE n; Executed from mysql Client on Node 2
127

128 Back-Ups with MDBE Cluster

129 Galera for Back-Ups All Galera Nodes are Continuously Up-to-Date Best Practices Dedicate a Reference Node for Back-ups Assign Global Transaction Identifier with Back-up Possible Methods Desynchronize a Node for Back-up XtraBackup 129

130 Back-Ups with Global Transaction Identifier Global Transaction Identifier (GTID) marks Position in Cluster Transaction Stream Backup with Known GTID makes Utilizing IST Possible when Joining New Nodes Recovering a Node Provisioning New Nodes Use XtraBackup's wrapper innobackupex with --galera-info 130
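A hedged innobackupex example capturing that GTID (the credentials and back-up directory are assumptions):
innobackupex --user=admin_backup --password=rover123 --galera-info /backups
Executed from Command-Line on the Back-Up Node
The resulting xtrabackup_galera_info file records wsrep_cluster_uuid:wsrep_last_committed for the back-up.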

131 Back-Up by Desynchronizing a Node Load Balancing Isolate Back-Up Node Server 1 Server 2 Server 3 Galera Replication 131

132 Back-Up by Desynchronizing a Node Desynchronize a Node from Group (Enable wsrep_desync) Load Balancing Server 1 Server 2 Server 3 Desynchronize Galera Replication 132

133 Back-Up by Desynchronizing a Node Load Balancing FLUSH TABLES WITH READ LOCK; Block Changes Server 1 Server 2 Server 3 Galera Replication 133

134 Back-Up by Desynchronizing a Node Read GTID from Status and Assign to Back-Up (wsrep_cluster_uuid, wsrep_last_committed) Load Balancing Back-Up Server 1 Server 2 Server 3 Galera Replication 134

135 Performing Back-Up After Desynchronizing Node Make Logical Backup mysqldump mydumper Physical Back-Up Copy with cp LVM Snapshot 135

136 Back-Up by Desynchronizing a Node Load Balancing UNLOCK TABLES; Replicate Changes Server 1 Server 2 Server 3 Galera Replication 136

137 Back-Up by Desynchronizing a Node Disable wsrep_desync Load Balancing Server 1 Server 2 Server 3 Galera Replication 137
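The desynchronize-and-back-up flow from the preceding slides, condensed into one hedged sequence (the dump command is illustrative):
SET GLOBAL wsrep_desync = ON;              -- step the node out of flow control
FLUSH TABLES WITH READ LOCK;               -- block changes
SHOW STATUS LIKE 'wsrep_last_committed';   -- record the GTID sequence number with the back-up
-- run the back-up from the shell, e.g. mysqldump --all-databases > backup.sql
UNLOCK TABLES;                             -- let the node replicate changes again
SET GLOBAL wsrep_desync = OFF;             -- rejoin flow control
Executed from mysql Client on the Back-Up Node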

138 XtraBackup Hot Back-Up Method Can be Used Any Time Simple and Efficient Use --galera-info Option to get Global Transaction Identifier Logged into Separate Galera Information File 138

139 MariaDB MaxScale Overview & Terms

140 MaxScale Objectives & Features Highly Scalable Lightweight with Small Footprint Minimum Possible Latency Highly Available Extendible Authentication Required Transparent to Applications 140

141 MaxScale Core Event Driven Network I/O Processor Polling, Event Driven Mechanism Responsible for Dispatching Events to Various Modules in MaxScale Application Events in MaxScale equal Network Requests: Handling an Incoming Connection on a Listener Socket Incoming Data for a Client Connection Data Arriving on a Connection from a Backend Database Server Socket Error on a Client or Database Connection Socket Closure Availability of Connections to Receive More Data 141

142 MaxScale as an Intermediary Application-to-Database Insulates Client Applications from Complexities of Backend Database Cluster MariaDB Master Application MaxScale MariaDB Slave MariaDB Slave MariaDB Slave 142

143 MaxScale Security using Database Firewall Protect against SQL Injection Prevent Unauthorized Data Access Prevent Data Damage
SQL Injection (diagram: Client -> MaxScale -> Master and Slaves)
1 Query: SELECT FROM customer WHERE id = 5;SELECT * FROM CUSTOMERS;
2 Firewall Filter:
rule safe_select deny no_where_clause on_queries select
rule safe_cust_select deny regex '.*from.*customers.*'
users %app-user@% match all rules safe_cust_select safe_select
3 Error: Query failed: 1141 Error: Required WHERE/HAVING clause is missing
143

144 Filtering Options Block or Allow Queries based on Matching Patterns Date & Time WHERE Clause Query Type Column Match Wildcard or Regular Expression 144

145 Load Balancing Read Scaling MariaDB Master-Slave Replication Global Transaction ID (GTID) unique across Multiple Independent Replication Streams Multi-Source Replication Optimistic Parallel Replication Slave Execution of Triggers MariaDB Slave Client Client Client MaxScale MariaDB Slave MariaDB Master 145

146 Load Balancing Write Scaling MariaDB Enterprise Cluster Client Client Client Multi-Master Replication for Write Scalability MaxScale MariaDB MariaDB MariaDB Galera Cluster 146

147 High Performance Scaling Binlog with MaxScale Binlog Server Horizontal Scaling of Slaves Without Master Overload Crash Safe Disaster Recovery Master Promotion without affecting other Slaves MaxScale MariaDB Master binlog Cache MariaDB Slave MariaDB Slave MariaDB Slave 147

148 Automatic Failover Automatic Detection of Master Failure MaxScale Monitor Launches Script upon Master Failure Promotes a Slave as New Master Instructs Other Slaves of New Master (diagram) Master Fails Monitor Detects Event: master_down Execute Failover Script Promote a Slave to Master CHANGE MASTER on Slaves 148

149 Scalability for Multi-Tenant Growth Scaling a Multi-Tenant Database using the Schema Shard Router Each Tenant has its own Schema Shard Scale a Database as Users and Data Volume Grow No Impact on Existing Users MaxScale Shard 1 Shard 3 Shard 2 Shard n 149

150 MariaDB to Big Data Replication MaxScale Binlog-Avro Translator Replicate binlog Events from MariaDB to a Kafka Producer Kafka Consumers Receive the Data into Hadoop, a Custom Data Warehouse, or a Custom Application (diagram: MariaDB Master and Slaves -> MaxScale -> Kafka -> Amazon EMR, Cassandra, Amazon Redshift, Google BigQuery, Hadoop) 150

151 Other Plugins & Features Weighted Routing Top Filter Log Top N Query RabbitMQ Filter Canonical Query Logging RegEx Filter Hint Filter Hint-Based Routing, Slave Lag-Based Routing Named Server Filter Query Regex Mapped to Server-Based Routing 151

152 MaxScale with Replication

153 Purpose of MariaDB Replication Load Balancing (Scaling SELECT Queries) Move Slow, Heavy Queries to Slave Take Slave Off-Line to Make Back-ups Easier to Accomplish with MaxScale Multiple Data Centers Need Fast Reads Gain Redundancy (High Availability) Fail Over - Quickly Promote a Slave to Master Fail Over Isn't Automatic Automate with MaxScale Minimal Downtime for Upgrades or Schema Changes Apply Changes to a Slave Promote Slave to Master and Redirect Traffic Apply Changes to Master and Switch 153

154 Replication Architecture (diagram): as in the replication terrain shown earlier, binlog events flow from the Master's Binary Log through the binlog Dump Thread to each Slave's IO Thread, Relay Log, and SQL Thread. MaxScale can be Placed between Master and Slaves, and between Clients and all Servers 154

155 Replication Threads Master binlog Dump Thread Pushes binlog Events to Slave Visible in SHOW PROCESSLIST as "Binlog Dump" Slave IO Thread Visible in SHOW SLAVE STATUS Requests and Receives binlog events from the Master Writes them to the Local Relay Log Slave SQL Thread Visible in SHOW SLAVE STATUS Reads the Relay Log and Executes Queries on Local Data Checks the Query Result Codes Match those Recorded by Master Slave Multiple Execution Threads Multi-Threaded Slave separates events based on Database Names Updates are Applied in Parallel, Not in Sequence 155

156 Master Configuration Enable Binary Log Choose Binary Log Format Set server-id in Configuration File to Unique Value Create Replication User Account on Master GRANT REPLICATION SLAVE ON *.* TO 'maria_replicator'@'%' IDENTIFIED BY 'rover123'; Make a Consistent Snapshot of Data on Master mysqldump -p -u admin --master-data --flush-logs \ --all-databases > full-dump.sql Documentation on Binary Log Format: 156

157 Slave Configuration Set server-id in Configuration File to Unique Value Add --read-only in Configuration File to Prevent Writes Set Optionally Replication Rules Load Data from Master mysql -p -u admin < full-dump.sql Provide Slave with Settings for Master (i.e., without mysqldump --master-data) CHANGE MASTER TO MASTER_HOST='<master-host>', MASTER_PORT=3306, MASTER_USER='maria_replicator', MASTER_PASSWORD='rover123'; Execute START SLAVE on Slave Documentation on Slave Options: Documentation on CHANGE MASTER TO: 157

158 Monitoring Replication Check Regularly Status on Master Includes Binary Log Number and Position Check More Often Status of Replication on Slave SHOW MASTER STATUS; SHOW SLAVE STATUS;
Slave_IO_State: Waiting for master to send event
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Last_Errno: 0
Last_Error:
Seconds_Behind_Master: 0
Documentation on SHOW MASTER STATUS: Documentation on SHOW SLAVE STATUS: 158

159 MariaDB Replication Manager High Availability Solution Monitor and Administer MariaDB Replication and MariaDB Enterprise Clusters On-Demand Slave to Master Promotion (i.e., Switchover) Electing a New Master on Failure Detection (i.e., Failover) Latest Releases of the Replication Manager: 159

160 Replication Manager Switchover Process Verify Replication Settings Check Replication on Slaves Check for Long Running Queries on Master Reject Writes on Master Execute FLUSH TABLES WITH READ LOCK Set READ_ONLY Flag Decrease MAX_CONNECTIONS Kill Pending Connections on Master Disable the IP Address on Master via Script Wait for Slaves to reach Current GTID Position Promote a Slave to be New Master Make New Master Writable if READ ONLY Inform Other Slaves and Previous Master of New Master Enable IP Address on Previous Master with a Script 160

161 MariaDB MaxScale Installation

162 Download Methods MariaDB Enterprise Repository ( my_portal/download) Sign In or Create Account Download Package for Operating System Download Package Directly ( or mariadb.com/downloads/maxscale) Download Package for Operating System (i.e., .rpm or .deb) Download MaxScale: Download MaxScale: Repository Configuration Tool: 162

163 Install MaxScale Package Install Package after Downloading
sudo yum install /path/maxscale-package.rpm
rpm Install Method
sudo dpkg -i /path/maxscale-package.deb
sudo apt-get install -f
deb Install Method 163

164 Create a MariaDB User MaxScale needs a User Account on MariaDB Set Host to the Address of the Server where MaxScale is Installed
CREATE USER 'maxscaler'@'localhost' IDENTIFIED BY 'rover123';
GRANT SELECT ON mysql.user TO 'maxscaler'@'localhost';
GRANT SELECT ON mysql.tables_priv TO 'maxscaler'@'localhost';
GRANT SELECT ON mysql.db TO 'maxscaler'@'localhost';
GRANT SHOW DATABASES, REPLICATION CLIENT ON *.* TO 'maxscaler'@'localhost';
User Requirements for MaxScale: 164

165 Configuration File MaxScale Configuration File (/etc/maxscale.cnf) Consists of Sections Services Servers Listeners Monitors Global Settings Copy Template Configuration File sudo cp /etc/maxscale.cnf.template /etc/maxscale.cnf 165

166 Edit Configuration File Change the User Name (user) and Password (passwd) Values for Each Section to the MariaDB User Created
[maxscale]
threads=1
[server1]
type=server
address=<server1-ip>
port=3306
protocol=mysqlbackend
[MySQL Monitor]
type=monitor
module=mysqlmon
servers=server1
user=maxscaler
passwd=rover123
Basic maxscale.cnf File 166

167 Starting MaxScale Start MaxScale with either systemctl or service systemctl start maxscale.service Executed from Command-Line service maxscale start Executed from Command-Line 167

168 Check MaxScale Log into MaxScale to Ensure it's Running
sudo maxadmin
MaxScale> SHOW SERVERS
Server 0x8abf50 (server1)
Server:
Status: Master, Running
Protocol: MySQLBackend
Port: 3306
Server Version: MariaDB-log
Node Id: 1
Master Id: -1
Slave Ids:
Repl Depth: 0
Number of connections: 0
Current no. of conns: 0
Current no. of operations: 0
168

169 Stopping MaxScale Methods to Stop MaxScale Stop MaxScale at the Command-Line (i.e., systemctl or service) Stop within MaxAdmin with SHUTDOWN MAXSCALE systemctl stop maxscale.service Executed from Command-Line MaxScale> SHUTDOWN MAXSCALE Accessed by running maxadmin 169

170 Disk Based Solutions DRBD

171 Distributed Replicated Block Device DRBD is a Linux Kernel Module, Providing Synchronous Replication of a Block Device between Two Servers Hot Spare Server If Primary Server Fails, Secondary Server is Used Immediately and Seamlessly DRBD Disk Writes over the Network slow MariaDB 171

172 Overview Distributed Replicated Block Device Replicates a Linux Virtual Block Device between Servers Requires a Kernel Module to be Installed Transparent to mysqld as it sits under the filesystem Operates in Real Time, Synchronously or Asynchronously Using DRBD Affects MariaDB's Write Performance Every fsync Requires a Network Hop in Synchronous Mode For Many Applications with High-Read and Low-Write Traffic, the Write Performance Drain is Minor 172

173 Replication Modes Single Primary Commonly used for MariaDB Only One Server may Manipulate Data Used with any Conventional File System (ext3/4, xfs, etc.) Used with any Storage Engine Dual Primary Rarely used with MariaDB Both Servers may Manipulate Data, Concurrently Requires a Cluster-Aware File System with a Distributed Lock System (GFS, OCFS2) Not All Storage Engines Supported 173

174 Network Protocols A Asynchronous Replication Write Operation Returns after Local Disk Sync, without waiting for Remote Node to Receive Data Not Crash Safe B Semi-Synchronous Replication Write Operation Returns after Remote Node has Received Replicated Data, but not necessarily synced it to disk Fairly Safe, except for Simultaneous Power failure C Fully Synchronous Replication Write Operation Returns only after Remote Node Received Replicated Data and also Synced to Disk Only Crash Safe Option since Data is Written in Two Places 174

175 Meta Data Size of the DRBD Block Device Generation Identifier (Cluster Maintenance) Activity Log and Quick-Sync Bitmap Storage Methods: Internally on Same Block Device Externally on a Dedicated Block Device Can Improve Write Latency, but generally Requires a Separate Physical Drive Can Complicate Recovery Process, but if Drive Fails, Both Data and Meta Data Not Lost 175

176 Resources Each Replicated Device is Part of a DRBD Resource Each Resource has a Primary or Secondary Role
resource name    user defined name
block device     linux virtual block device to be replicated
disk config      local copy of disk data, plus meta data
network config   peer node details, transfer rates, etc
176

177 Node Roles Node States are Visible in /proc/drbd
Primary: Node Permits Reads and Writes; Occurs Only on One Node at a Time, Unless in Dual-Primary Mode
Secondary: Node Prohibits Reads and Writes; Receives Updates from the Primary Partner; May Occur Simultaneously on Multiple Nodes
Unknown: Displayed for the Other Node when Communication is Off-line; Never Displayed for the Local Node
177

178 Connection States BrokenPipe Connected Disconnecting NetworkFailure PausedSyncS PausedSyncT ProtocolError StandAlone StartingSyncS SyncSource SyncTarget TearDown Timeout Unconnected VerifyS VerifyT WFBitMapS WFBitMapT WFConnection WFReportParams WFSyncUUID 178

179 Disk States Attaching Consistent DiskLess DUnknown Inconsistent Negotiating Outdated UpToDate Failed 179

180 Tools and Info
drbdadm           High-Level Administration Tool
drbdsetup         Configuration of the DRBD module that has been loaded into the running kernel
drbdmeta          Create, dump, restore, and modify DRBD's meta data structures
drbd-overview.pl  Human readable overview of all configured resources
/proc/drbd        Real time status information provided by the kernel module
180

181 DRBD Configuration

182 Overview Setup DRBD Setup MySQL with datadir on DRBD Backed Partition DRBD and MySQL should Not be Configured to Auto-Start the CRM will Start them Based on State of Cluster Setup Heartbeat for Service Monitoring and Failover Heartbeat should be Configured to Start Automatically Setup Pacemaker as Cluster Resource Manager Pacemaker Replaces the Heartbeat 2 Cluster Stack and Relies on Heartbeat 3 for Messaging The Heartbeat 2 Cluster Stack can be Used for Test Installations and Configurations 182

183 Configuration File Global Configuration (/etc/drbd.conf) for DRBD Resources Configuration File should be Identical on All Nodes Typically Contained in Multiple Files in the /etc/drbd.d/ Directory include "/etc/drbd.d/global_common.conf"; include "/etc/drbd.d/*.res"; global { } section DRBD General Settings common { } section Defaults for All Resources resource { } section(s) Resource-Specific Settings 183
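A minimal sketch of that split layout, with assumed option values:

    # /etc/drbd.conf -- only includes, identical on all nodes
    include "/etc/drbd.d/global_common.conf";
    include "/etc/drbd.d/*.res";

    # /etc/drbd.d/global_common.conf
    global {
        usage-count no;    # opt out of DRBD's online usage counter
    }
    common {
        protocol C;        # default for all resources: fully synchronous
    }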

184 Storage All Nodes Need the Same Amount of Physical Storage Account for Meta Data Space when Allocating Several Types are Allowed for the Virtual Block Device Physical Hard Drive Partition, Software RAID Device, LVM, etc. Don't Use a Loop Device 184
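A dedicated LVM logical volume is a common backing device; a sketch with assumed names and sizes:

    # Carve a logical volume out of volume group vg0 to back DRBD
    lvcreate --name drbd-mysql --size 50G vg0
    # /dev/vg0/drbd-mysql then becomes the "disk" in the resource definition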

185 Network Use the Fastest and Most Direct Connection Gigabit Ethernet is generally a Minimum Requirement A Crossover Cable is Better than a LAN with a Switch Other Methods like DMI are Better Long-Distance Replication is Not Recommended, unless DRBD Proxy is in Use Consider Network Security DRBD uses Two-Way TCP Connections, usually on Ports Starting from 7788 Ensure the Firewall Allows this Traffic Traffic is Not Encrypted -- Who is Listening? 185
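Because the replication traffic is unencrypted, the DRBD ports should be restricted to the peer node; an iptables sketch with an assumed peer address and port range:

    # Allow DRBD traffic (ports 7788-7799 here) only from the peer node
    iptables -A INPUT -p tcp -s 192.0.2.11 --dport 7788:7799 -j ACCEPT
    iptables -A INPUT -p tcp --dport 7788:7799 -j DROP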

186 Resource Simple DRBD Resource Configuration Example Two Hosts in the Cluster Protocol C is Fully Synchronous Nodes have the Same Hardware Configuration Meta Data is Internal (i.e., some /dev/sda7 Space is Lost) All Options can be Configured on a Node-Specific Basis For Fail Over and Testing, Nodes in the Cluster should be as Similar as Possible

    resource mysql {
        protocol  C;
        device    /dev/drbd1;
        disk      /dev/sda7;
        meta-disk internal;
        on node1 { address <node1-ip>:7789; }   # IP address placeholder
        on node2 { address <node2-ip>:7789; }   # IP address placeholder
    }

186

187 Setup Process Initialize the Resource Create Meta Data: drbdadm create-md resource Attach the Device: drbdadm attach resource Connect the Nodes: drbdadm connect resource Contents of /proc/drbd:

    version: (api:88/proto:86-89) GIT-hash: [..snip..]
    1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r
       ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:

Node State is Secondary and Disk State is Inconsistent 187
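Run on both nodes; a sketch assuming the resource is named mysql as in the earlier example:

    drbdadm create-md mysql   # write the meta data structures
    drbdadm attach mysql      # attach the backing device
    drbdadm connect mysql     # connect to the peer node
    cat /proc/drbd            # expect Secondary/Secondary, Inconsistent/Inconsistent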

188 Setup Process Start the Initial Full Device Synchronization Choose a Node to be Primary First Possibly the Node which Already Holds the Wanted Data drbdadm -- --overwrite-data-of-peer primary resource Monitor Progress in /proc/drbd Time Consuming Make a File System on the Blank Primary Device: mkfs -t ext... (see the sketch below)
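Putting the steps together, assuming the mysql resource and an ext4 file system (the file system type on the slide is truncated, so ext4 is an assumption):

    # On the node whose data should win, start the initial full sync
    drbdadm -- --overwrite-data-of-peer primary mysql
    # Watch progress in /proc/drbd (this can take a long time)
    watch -n1 cat /proc/drbd
    # Once Primary, create a file system on the DRBD device
    mkfs -t ext4 /dev/drbd1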

189 MariaDB Configuration Point datadir to a Location on the DRBD Device Use mysql_install_db for a Fresh Instance Or Move an Existing datadir into Place (Stop mysqld First) Ensure MariaDB is Configured Identically on Both Nodes for Fail Over mysqld will Know What to Expect and the Location of the Data Directory Start MariaDB 189
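A sketch of moving an existing data directory onto the DRBD device (the mount point, paths, and service name are assumptions):

    service mysql stop                # stop mysqld first
    mount /dev/drbd1 /mnt/drbd        # on the Primary node only
    mv /var/lib/mysql /mnt/drbd/mysql
    # In my.cnf, identical on both nodes:
    #   [mysqld]
    #   datadir = /mnt/drbd/mysql
    service mysql start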

190 Other Factors Resource Synchronization Rate Prevents Network Saturation if Relevant on a Shared Network resource mysql { syncer { rate 40M; } } A Battery-Backed Disk Controller is Useful Increases the Speed of Local Syncs Can Increase the Speed of Protocol C by Allowing Nodes to Report a Write as Synced Sooner 190

191 Split Brain Set the Resource Split-Brain Behavior Any Local Executable or Script is Allowed resource mysql { handlers { split-brain "script"; } } DRBD Includes an Example Script that Sends a Notification: split-brain "/usr/lib/drbd/notify-split-brain.sh sysop"; The DRBD Split-Brain Response can be Automatic Depends on the Number of Nodes Claiming to be Primary Configured in the net { } Section of drbd.conf Manual or Cluster Resource Management System Intervention is Required in Some Situations 191

192 Split Brain Automatic Response The Response Policy Depends on the Number of Primary Nodes Defined in the net { } Section of drbd.conf after-sb-0pri (Zero Primaries): disconnect, discard-younger-primary, discard-least-changes, discard-zero-changes after-sb-1pri (One Primary): disconnect, consensus, call-pri-lost-after-sb, discard-secondary after-sb-2pri (Two Primaries): disconnect, call-pri-lost-after-sb 192
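A sketch of an automatic-recovery policy set, with assumed choices for each case:

    resource mysql {
        net {
            after-sb-0pri discard-zero-changes;  # no primaries: keep the node with changes
            after-sb-1pri discard-secondary;     # one primary: the secondary is the victim
            after-sb-2pri disconnect;            # two primaries: require manual intervention
        }
    }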

193 Split Brain Manual Response Select a Victim Node to Manually Recover from a Split-Brain drbdadm secondary resource drbdadm -- --discard-my-data connect resource Select the Surviving Node drbdadm connect resource Verify that the Victim has Started the Re-Synchronization Process cat /proc/drbd 193
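The full manual-recovery sequence, assuming the mysql resource from the earlier examples:

    # On the split-brain victim (its modifications will be discarded)
    drbdadm secondary mysql
    drbdadm -- --discard-my-data connect mysql

    # On the surviving node (only if it is also disconnected)
    drbdadm connect mysql

    # Back on the victim, confirm resynchronization has begun
    cat /proc/drbd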

194 DRBD Monitoring Introduction Complex Scenarios Enterprise Cluster Back-Ups with Cluster MaxScale Installation DRBD Monitoring Overview Semi-Synch Plugin Cluster Configuration MariaDB MaxScale Disk Based Solutions Shared Disk Clustering MariaDB Replication Replication Manager Cluster Schema Changes MaxScale with Replication DRBD Configuration Conclusion

195 Heartbeat Part of the Linux-HA Project Available on Most Linux Distributions A Daemon Providing Clustering Infrastructure Communication Messaging Layer, UDP over IPv4 or Serial Links Membership Ensuring All Nodes Talk, or Resolving Communication Problems Works with an External Cluster Resource Manager In Heartbeat 2, there is a Built-In CRM Heartbeat 3+ Usually Works with Pacemaker or Corosync The CRM Starts and Stops Services IP Addresses, Apache, mysqld, drbd, etc. 195
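A minimal /etc/ha.d/ha.cf sketch (the interface and node names are assumptions); the last directive hands resource management over to Pacemaker:

    autojoin none       # only the listed nodes may join the cluster
    bcast eth1          # heartbeat messages over a dedicated interface
    keepalive 2         # seconds between heartbeats
    deadtime 30         # seconds before a silent node is declared dead
    node node1 node2
    crm respawn         # use the Pacemaker CRM, restart it if it dies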

196 Pacemaker A Cluster Resource Manager Replacement for the Heartbeat 2 CRM A Continuation, Not a Fork, of the Heartbeat 2 CRM Code Base Uses Heartbeat 3 as the Messaging and Membership Layer OpenAIS is an Alternative Detects Service Failures, Initiates Recovery or Fail Over Supports Active/Active and Active/Passive Clusters Supports STONITH, for Scenarios Requiring Node Fencing (e.g., DRBD Split-Brain) Embedded Cluster Command Shell for Easy Administration 196
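With Pacemaker's embedded crm shell, a DRBD resource can be managed as a master/slave pair; a sketch with assumed resource names:

    # Define the DRBD resource via the LINBIT OCF agent
    crm configure primitive p_drbd_mysql ocf:linbit:drbd \
        params drbd_resource="mysql" \
        op monitor interval="15s"
    # Wrap it in a master/slave set: one Primary, two copies in total
    crm configure ms ms_drbd_mysql p_drbd_mysql \
        meta master-max="1" clone-max="2" notify="true"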

197 Shared Disk Clustering Introduction Complex Scenarios Enterprise Cluster Back-Ups with Cluster MaxScale Installation DRBD Monitoring Overview Semi-Synch Plugin Cluster Configuration MariaDB MaxScale Disk Based Solutions Shared Disk Clustering MariaDB Replication Replication Manager Cluster Schema Changes MaxScale with Replication DRBD Configuration Conclusion

198 Shared-Disk Architecture Active/Passive Replication Failover Requires MariaDB Crash Recovery, and Often File System Crash Recovery Crash Recovery is Risky with the Non-Transactional MyISAM Storage Engine Combined with Pacemaker/Heartbeat for Automatic Failover A Virtual IP is Often Used to Fail Over In Theory, the SAN is a SPOF 198
