Primary-Backup Replication

CS 240: Computing Systems and Concurrency, Lecture 7. Marco Canini.
Credits: Michael Freedman and Kyle Jamieson developed much of the original material.

Simplified Fault Tolerance in MapReduce
MapReduce used GFS, stateless workers, and the clients themselves to achieve fault tolerance.
[Figure: MapReduce execution overview. The user program forks a master and workers; the master assigns map and reduce tasks; map workers read input splits and write intermediate files to local disk; reduce workers remote-read those files and write the output files.]

Limited Fault Tolerance in Totally-Ordered Multicast
Totally-ordered multicast gives stateful server replication for fault tolerance, but there is no story for server replacement upon a server failure: in effect, no replication.
Today: how do we make stateful servers fault-tolerant?
[Figure: two replicated stateful servers, P1 and P2.]

Plan
1. Introduction to Primary-Backup replication
2. Case study: VMware's fault-tolerant virtual machine
Upcoming: Two-phase commit and distributed consensus protocols

Primary-Backup: Goals
Mechanism: replicate and separate servers.
Goal #1: Provide a highly reliable service despite some server and network failures; continue operation after failure.
Goal #2: Servers should behave just like a single, more reliable server.

State machine replication
Any server is essentially a state machine: the set of (key, value) pairs is the state, and operations transition between states.
We need an op to be executed on all replicas, or none at all; i.e., we need distributed all-or-nothing atomicity. If an op is deterministic, the replicas will end in the same state.
Key assumption: operations are deterministic. We will relax this assumption later today.
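
To make the state-machine view concrete, here is a minimal Go sketch of a deterministic key-value state machine. It is an illustration only; the package name pb and the Op, KVStateMachine, and Apply identifiers are assumptions, not part of any system named in these slides.

    // Package pb is an illustrative sketch of primary-backup replication.
    package pb

    // Op is one deterministic operation on the replicated state machine.
    type Op struct {
        Kind  string // "get" or "put"
        Key   string
        Value string // used only by "put"
    }

    // KVStateMachine's state is the set of (key, value) pairs; Apply transitions
    // between states. Because Apply is deterministic, any two replicas that
    // apply the same sequence of Ops end in the same state.
    type KVStateMachine struct {
        state map[string]string
    }

    func NewKVStateMachine() *KVStateMachine {
        return &KVStateMachine{state: make(map[string]string)}
    }

    func (sm *KVStateMachine) Apply(op Op) string {
        switch op.Kind {
        case "put":
            sm.state[op.Key] = op.Value
            return ""
        default: // "get"
            return sm.state[op.Key]
        }
    }

Feeding the same sequence of Ops to two fresh KVStateMachines leaves them with identical maps, which is exactly the property primary-backup replication relies on.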

Primary-Backup (P-B) approach
Nominate one server the primary and call the other the backup. Clients send all operations (get, put) to the current primary, and the primary orders the clients' operations.
There should be only one primary at a time, and we need to keep clients, primary, and backup in sync about who is primary and who is backup.

Challenges
Network and server failures. Network partitions: within each network partition, near-perfect communication between servers; between network partitions, no communication between servers.

Primary-Backup (P-B) approach
[Figure: a client, S1 (primary), and S2 (backup).]
1. The primary logs the operation locally.
2. The primary sends the operation to the backup and waits for an ack. The backup performs the operation, or just adds it to its log.
3. The primary performs the op and acks to the client, but only after the backup has applied the operation and acked.
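
Continuing the illustrative pb package above, a minimal sketch of the primary's side of these three steps; the Primary struct, its sendToBackup stub, and HandleClientOp are hypothetical names, and failure handling is deliberately left out.

    package pb

    // Primary sketches the primary replica. sendToBackup stands in for the RPC
    // that forwards an operation to the backup and blocks until the backup acks.
    type Primary struct {
        sm           *KVStateMachine
        log          []Op
        sendToBackup func(Op) error
    }

    // HandleClientOp follows the three steps on the slide.
    func (p *Primary) HandleClientOp(op Op) (string, error) {
        p.log = append(p.log, op) // 1. log the operation locally
        if err := p.sendToBackup(op); err != nil {
            return "", err // 2. no ack from the backup: do not answer the client
        }
        return p.sm.Apply(op), nil // 3. apply locally, then ack the client
    }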

View server
A view server decides who is primary and who is backup. Clients and servers depend on the view server and don't decide on their own (they might not agree).
Challenge in designing the view service: we only want one primary at a time, so careful protocol design is needed.
For now, assume the view server never fails.

Monitoring server liveness
Each replica periodically pings the view server. The view server declares a replica dead if it missed N pings in a row, and considers the replica alive again after a single ping.
Can a replica be alive but declared dead by the view server? Yes, in the case of a network failure or partition.
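
One way the view server might implement this rule, sketched in the same illustrative pb package; the livenessTracker type, its fields, and the deadPings threshold (the N above) are assumptions.

    package pb

    import "time"

    // livenessTracker records the last ping heard from each replica. A replica
    // is declared dead after deadPings consecutive missed intervals, and is
    // considered alive again after a single ping.
    type livenessTracker struct {
        lastPing  map[string]time.Time
        interval  time.Duration
        deadPings int // the N from the slide
    }

    func (t *livenessTracker) Ping(replica string, now time.Time) {
        t.lastPing[replica] = now
    }

    // IsDead can return true for a replica that is actually alive but
    // partitioned away from the view server.
    func (t *livenessTracker) IsDead(replica string, now time.Time) bool {
        last, ok := t.lastPing[replica]
        return !ok || now.Sub(last) > time.Duration(t.deadPings)*t.interval
    }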

The view server decides the current view
View = (view #, primary server, backup server)
Challenge: all parties make their own local decision of the current view number.
[Figure: a sequence of views over time, e.g. (1, S1, S2), then (2, S2, -), then (3, S2, S3) once the previously idle server S3 becomes the backup.]

Agreeing on the current view
In general, any number of servers can ping the view server, and it is okay to have a view with a primary and no backup.
We want everyone to agree on the view number, so include the view # in the RPCs between all parties.
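
In code, the view and the habit of carrying its number on every RPC might look like this sketch, continuing the illustrative pb package (the View and ForwardArgs types are assumed names):

    package pb

    // View is what the view server hands out: a view number, the primary, and
    // the backup.
    type View struct {
        Num     uint64
        Primary string
        Backup  string // "" means a view with a primary and no backup
    }

    // ForwardArgs illustrates including the sender's view number in every RPC,
    // so that disagreement about the current view is detectable at the receiver.
    type ForwardArgs struct {
        ViewNum uint64
        Op      Op
    }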

Transitioning between views
How do we ensure the new primary has up-to-date state? Only promote a previous backup, i.e., don't make a previously-idle server primary, and set the liveness detection timeout to be larger than the state transfer time.
How does the view server know whether the backup is up to date? The view server sends a view-change message to all servers, and the primary must ack the new view once the backup is up-to-date. The view server stays with the current view until that ack, even if the primary has failed or appears to have failed.
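
A sketch of the view server's side of these rules, again in the illustrative pb package; the viewServer type and its methods are assumptions about one possible implementation.

    package pb

    // viewServer holds the current view and whether its primary has acked it.
    // It refuses to move to a new view until the current one is acked, even if
    // the current primary appears to have failed.
    type viewServer struct {
        current View
        acked   bool
    }

    // PrimaryAck is called when the current primary acknowledges the view,
    // which it does only once its backup is up to date.
    func (vs *viewServer) PrimaryAck(viewNum uint64) {
        if viewNum == vs.current.Num {
            vs.acked = true
        }
    }

    // promoteBackup advances to a new view only if the current view was acked,
    // and only ever promotes the previous backup to primary.
    func (vs *viewServer) promoteBackup(newBackup string) {
        if !vs.acked || vs.current.Backup == "" {
            return // stay with the current view
        }
        vs.current = View{Num: vs.current.Num + 1, Primary: vs.current.Backup, Backup: newBackup}
        vs.acked = false
    }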

Split Brain
[Figure: the view server has moved from view (1, S1, S2) to view (2, S2, -), but S1 still holds view (1, S1, S2), so both S1 and S2 can believe they are the primary.]

Server S2 in the old view
[Figure: the same scenario where S2 still operates in view (1, S1, S2) even though the view server has announced view (2, S2, -).]

Server S2 in the new view
[Figure: the same scenario where S2 has learned of view (2, S2, -) and now acts as the primary of the new view.]

State transfer via operation log
How does a new backup get the current state? If S2 is the backup in view i but was not in view i-1, S2 asks the primary to transfer the state.
One alternative: transfer the entire operation log. Simple, but inefficient (the operation log is long).

State transfer via snapshot
Every op must be either before or after the state transfer: if an op is before the transfer, the transfer must reflect the op; if an op is after the transfer, the primary forwards the op to the backup after the state transfer finishes.
If each client has only one RPC outstanding at a time, state = the map + the result of the last RPC from each client. (We had to save this anyway for at-most-once RPC.)
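
Under the one-outstanding-RPC-per-client assumption, the snapshot the primary transfers can be as small as this sketch; Snapshot and ClientReply are illustrative names in the pb package, not a prescribed format.

    package pb

    // ClientReply remembers the result of a client's last RPC, which replicas
    // must keep anyway to provide at-most-once RPC semantics.
    type ClientReply struct {
        Seq    uint64 // sequence number of the client's last request
        Result string
    }

    // Snapshot is the state the primary transfers to a new backup: the key-value
    // map plus the last reply sent to each client.
    type Snapshot struct {
        KV        map[string]string
        LastReply map[string]ClientReply // keyed by client ID
    }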

Summary of rules
1. View i's primary must have been the primary or backup in view i-1.
2. A non-backup must reject forwarded requests. The backup accepts forwarded requests only if they match its idea of the current view.
3. A non-primary must reject direct client requests.
4. Every operation must be before or after the state transfer.
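
Rules 2 and 3 translate directly into checks on incoming requests. A sketch in the illustrative pb package; the replica type and the error values are assumptions.

    package pb

    import "errors"

    var (
        errWrongView  = errors.New("request carries a different view number")
        errNotPrimary = errors.New("not the primary in the current view")
        errNotBackup  = errors.New("not the backup in the current view")
    )

    // replica is one server's local idea of the current view and its own name.
    type replica struct {
        me   string
        view View
    }

    // AcceptForward enforces rule 2: only the backup, and only for its own idea
    // of the current view, may accept a forwarded request.
    func (r *replica) AcceptForward(args ForwardArgs) error {
        if r.view.Backup != r.me {
            return errNotBackup
        }
        if args.ViewNum != r.view.Num {
            return errWrongView
        }
        return nil
    }

    // AcceptClient enforces rule 3: a non-primary rejects direct client requests.
    func (r *replica) AcceptClient() error {
        if r.view.Primary != r.me {
            return errNotPrimary
        }
        return nil
    }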

Primary-Backup: Summary
This is the first step in our goal of making stateful replicas fault-tolerant. It allows replicas to provide continuous service despite persistent network and machine failures, and it finds repeated application in practical systems (next).

Plan
1. Introduction to Primary-Backup replication
2. Case study: VMware's fault-tolerant virtual machine (Scales et al., SIGOPS Operating Systems Review 44(4), Dec. 2010)
Upcoming: Two-phase commit and distributed consensus protocols

VMware vSphere Fault Tolerance (VM-FT)
Goals:
1. Replication of the whole virtual machine
2. Completely transparent to applications and clients
3. High availability for any existing software

Overview
Two virtual machines (primary and backup) run on different bare-metal hosts. The logging channel runs over the network, and the two hosts share a Fibre Channel-attached disk.
[Figure: primary VM and backup VM connected by the logging channel, both attached to the shared disk.]

Virtual Machine I/O
VM inputs: incoming network packets, disk reads, keyboard and mouse events, clock timer interrupt events.
VM outputs: outgoing network packets, disk writes.

Overview
The primary sends its inputs to the backup over the logging channel; the backup's outputs are dropped. Primary and backup exchange heartbeats, and if the primary fails, the backup takes over.
[Figure: primary VM and backup VM, logging channel, shared disk.]

VM-FT: Challenges
1. Making the backup an exact replica of the primary
2. Making the system behave like a single server
3. Avoiding two primaries (split brain)

Log-based VM replication
Step 1: The hypervisor at the primary logs the causes of non-determinism:
1. Log the results of input events, including the current program counter value for each.
2. Log the results of non-deterministic instructions, e.g. the result of a timestamp counter read (RDTSC).

Log-based VM replication
Step 2: The primary hypervisor sends the log entries to the backup hypervisor over the logging channel, and the backup hypervisor replays them. It stops the backup VM at the next input event or non-deterministic instruction, then delivers the same input as the primary, or the same non-deterministic instruction result as the primary.
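
A highly simplified Go sketch of what the logging channel carries and how the backup replays it. Real VM-FT does this inside the hypervisor; the LogEntry fields, the vmControl interface, and replayBackup are assumptions made only to illustrate the idea.

    package vmft

    // LogEntry records one source of non-determinism at the primary: an input
    // event or the result of a non-deterministic instruction, plus the exact
    // execution point at which it must be delivered.
    type LogEntry struct {
        Kind           string // e.g. "network-input", "disk-read", "rdtsc"
        InstructionPos uint64 // where the backup must stop and deliver it
        Data           []byte // the input bytes or the instruction's result
    }

    // vmControl is the minimal control surface over the backup VM assumed here.
    type vmControl interface {
        RunUntil(pos uint64)              // execute up to an instruction position
        Deliver(kind string, data []byte) // inject an input or instruction result
    }

    // replayBackup stops the backup VM at each logged point and delivers exactly
    // the primary's input or non-deterministic result.
    func replayBackup(vm vmControl, log <-chan LogEntry) {
        for e := range log {
            vm.RunUntil(e.InstructionPos)
            vm.Deliver(e.Kind, e.Data)
        }
    }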

VM-FT: Challenges
1. Making the backup an exact replica of the primary
2. Making the system behave like a single server (the FT protocol)
3. Avoiding two primaries (split brain)

Primary to backup failover
When the backup takes over, non-determinism will make it execute differently than the primary would have done. This is okay!
Output requirement: when the backup VM takes over, its execution is consistent with the outputs the primary VM has already sent.

The problem of inconsistency
[Figure: input and output timelines at the primary and the backup.]

FT protocol
The primary logs each output operation and delays any output until the backup acknowledges it. Execution can restart at an output event, which may produce a duplicate output.
[Figure: input and output timelines at the primary and the backup, showing a possible duplicate output after failover.]
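
The output rule can be sketched as a single gate in front of every externally visible output. The primaryFT type, its acked channel, and sendExternally are hypothetical stand-ins for hypervisor internals, and acks are assumed to arrive in log order.

    package vmft

    // primaryFT tracks how far the backup has acknowledged the log.
    type primaryFT struct {
        acked          chan uint64         // backup acks, delivered in log order
        lastAcked      uint64
        sendExternally func(output []byte) // releases an output to the outside world
    }

    // emitOutput implements the output rule: the output produced after log entry
    // seq is held back until the backup has acknowledged that entry. On failover
    // the backup may re-execute this point and emit a duplicate output.
    func (p *primaryFT) emitOutput(output []byte, seq uint64) {
        for p.lastAcked < seq {
            p.lastAcked = <-p.acked // block until the backup has caught up
        }
        p.sendExternally(output)
    }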

VM-FT: Challenges
1. Making the backup an exact replica of the primary
2. Making the system behave like a single server
3. Avoiding two primaries (split brain): the logging channel may break

Detecting and responding to failures
The primary and the backup each run UDP heartbeats and monitor the logging traffic from their peer.
Before going live (the backup) or finding a new backup (the primary), a replica executes an atomic test-and-set on a variable in shared storage. If the replica finds the variable already set, it aborts.
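
A sketch of the split-brain guard. VM-FT performs the test-and-set on the shared disk; here a mutex-protected flag stands in for that shared-storage variable, and the sharedDiskFlag and goLive names are assumptions.

    package vmft

    import "sync"

    // sharedDiskFlag stands in for the test-and-set variable on shared storage.
    type sharedDiskFlag struct {
        mu  sync.Mutex
        set bool
    }

    // TestAndSet atomically reads the flag, sets it, and returns the old value.
    func (f *sharedDiskFlag) TestAndSet() bool {
        f.mu.Lock()
        defer f.mu.Unlock()
        old := f.set
        f.set = true
        return old
    }

    // goLive runs before a backup goes live or a primary picks a new backup: if
    // the flag was already set, the other replica won the race and this one
    // aborts, so there can never be two primaries.
    func goLive(f *sharedDiskFlag) bool {
        return !f.TestAndSet()
    }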

VM-FT: Conclusion
A challenging application of primary-backup replication: the design preserves the correctness and consistency of the replicated VM's outputs despite failures. Performance results show generally high performance with low logging bandwidth overhead.

Sunday topic: Two-Phase Commit