Consensus and related problems. To do: failure; consensus and related problems; Raft.

Consensus. We have seen protocols tailored for individual types of consensus/agreement: which process can enter the critical section, who is the leader, what is the order of messages. Here we consider a more general form of consensus, with variations on the problem depending on assumptions: synchronous or asynchronous system, fail-stop or Byzantine failures.

Consensus and failures. How do we make processes agree on a value after one or more of them have proposed what that value should be? Example: the replicas of a replicated DB must reach consensus on each update, even if some processes fail. Terminology: a failure means the system cannot meet its promises; an error is the part of the system's state that can lead to a failure. There are many ways to fail.

Failure models. Crash failures: the process simply halts, but behaves correctly before halting (fail-stop: a crash failure that others can detect). Omission failures: the process fails to respond to incoming requests, data is lost. Timing failures: the response takes too long, exceeding a specified real-time interval. Response failures: the output is incorrect. Byzantine (arbitrary) failures: anything goes, including arbitrary output and arbitrary timing failures.

Failure → redundancy. Redundancy is the basic approach to masking faults. Information redundancy: add extra bits to a message so it can be reconstructed. Time redundancy: do something more than once if needed. Physical redundancy: add copies of software/hardware.

Consensus: definition. A group of N processes must reach consensus. Every process p_i starts undecided and proposes a value v_i; processes exchange values; each then sets the value of its decision variable d_i, entering the decided state. Requirements: Termination, all correct processes eventually decide; Agreement, if a correct process decides v, all correct processes decide the same value; Integrity, if all correct processes proposed v, then any correct process in the decided state has chosen v.

Dreamland solution (just as an illustration). Consider a system where processes cannot fail. Each of the N processes p_i R-multicasts its proposed value to the group g. Every process p_j collects all N values (including its own) and evaluates a = majority(v_i for i = 1..N), returning a, or NIL if no majority exists (the function could be something other than majority, e.g., min or max). Requirements: termination is guaranteed by the reliability of the multicast; agreement and integrity follow from the definition of majority and the integrity property of reliable multicast (every process receives the same set of values and runs the same function, so they all decide the same value).
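
As an illustration of the decision step in this failure-free setting, here is a minimal Go sketch of the majority function, assuming a reliable multicast has already delivered the same set of proposed values to every process (function and variable names are mine, not from the slides):

```go
package main

import "fmt"

// majority returns the value proposed by more than half of the processes,
// or ok=false (the "NIL" case) if no such value exists. Every correct
// process runs this same deterministic function over the same delivered
// values, so all processes decide identically.
func majority(values []string) (decision string, ok bool) {
	counts := make(map[string]int)
	for _, v := range values {
		counts[v]++
		if counts[v] > len(values)/2 {
			return v, true
		}
	}
	return "", false
}

func main() {
	delivered := []string{"attack", "attack", "retreat"} // values R-delivered to p_j
	if d, ok := majority(delivered); ok {
		fmt.Println("decide:", d)
	} else {
		fmt.Println("decide: NIL (no majority)")
	}
}
```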

Consensus and related problems: the Byzantine generals problem. A group of generals needs to agree whether to attack or to retreat. One of them, the commander, issues an order; the others, the lieutenants, decide what to do. One of the generals (possibly the commander itself) may be treacherous. This is consensus with a slightly different integrity condition (not everyone proposes, just the commander): if the commander is correct, all correct lieutenants decide on the value proposed by the commander.

Consensus and related problems: interactive consistency. All processes agree on a vector of values, one per process. Similar requirements: Termination, eventually each correct process sets its decision variable; Agreement, the decision vector of all correct processes is the same; Integrity, if p_i is correct, all correct processes decide on v_i as the i-th entry of their vector. Consensus, Byzantine generals, and interactive consistency can all be defined in the presence of crash or arbitrary failures, and for synchronous or asynchronous systems. It is possible to derive a solution to one problem using a solution to another.

Back to the Byzantine generals problem. If the commander is a traitor, it can propose attack to one lieutenant and retreat to another. If a lieutenant is a traitor, it can tell one lieutenant that the commander said attack and tell another that the commander said retreat. Assumptions: synchronous system, so correct processes can detect the absence of a message (timeout) but cannot conclude that the sender has crashed; arbitrary/Byzantine failures, so a faulty process can send any message with any value at any time (or omit to send it); the communication channels between processes are private.

Byzantine generals problem: impossibility with three processes. Consider two scenarios: in the first the commander is correct but one lieutenant is evil; in the second the commander is evil. In both, the correct lieutenant receives a value from the commander and a conflicting report from the other lieutenant, so if it has to accept the commander's value v in the first scenario, it would have to accept the conflicting value w in the second; the two scenarios are indistinguishable to it. Lamport et al.'s solution solves BGP for 3m+1 or more generals in the presence of at most m traitors.

Byzantine generals problem: a solution. Algorithm BG(0): (1) the commander (C) sends its value to every lieutenant; (2) each lieutenant uses the value received from C, or RETREAT if it received no value. Algorithm BG(m), m > 0: (1) C sends its value to every lieutenant; (2) for each i, let v_i be the value Lieutenant i receives from C, or RETREAT if it receives none; Lieutenant i acts as C in algorithm BG(m-1) to send the value v_i to each of the n-2 other lieutenants; (3) for each i, and each j != i, let v_j be the value Lieutenant i received from Lieutenant j in step (2) (using algorithm BG(m-1)), or else RETREAT; Lieutenant i uses the value majority(v_1, ..., v_{n-1}).

A run with N = 4 and f = 1. If a lieutenant is the traitor: the two correct lieutenants compute majority(v, v, x) = v and majority(v, v, y) = v, so both decide on the commander's value v. If the commander is the traitor: all three lieutenants compute majority(v, w, z) = NIL, so they still agree (on NIL).

Consensus and related problems. It is possible to derive a solution to one problem using a solution to another: consensus based on interactive consistency, IC based on Byzantine generals, and so on. For example, suppose there is a solution to Byzantine generals. Let BG_i(j, v) be the decision value of p_i in a run of the Byzantine generals solution where the commander p_j proposed value v, and let IC_i(v_1, ..., v_N)[j] be the j-th value in the decision vector of p_i in a run of the interactive consistency solution. We get IC from BG by running BG N times, once per process: IC_i(v_1, ..., v_N)[j] = BG_i(j, v_j) for i, j = 1, ..., N.

Impossibility of consensus. But what if the system is asynchronous? No algorithm can guarantee to reach consensus with even one faulty process (failing by crashing): the famous result of Fischer, Lynch, and Paterson (FLP), 1985. Basic idea: there is always a continuation of the processes' execution that avoids reaching consensus (why? you cannot tell whether a process is running slowly or is dead). Note that reaching consensus is still possible, just not guaranteed, so sometimes it may work. How do we work around this? Masking faults: processes keep data in persistent storage so they can restart after crashing (a crash then just looks like slowness). Using failure detectors that are perfect by design: processes agree on a maximum response time, beyond which a process is declared failed.

Replicated state machines. Consensus typically appears in the context of replicated state machines: a state machine (SM), i.e., a data structure with deterministic operations, is replicated on a collection of servers. The state is consistent if every server sees the same sequence of operations. Common (non-Byzantine) algorithms: Paxos [Lamport], Viewstamped Replication [Oki, Liskov], Raft [Ongaro, Ousterhout].
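
To make the replicated-state-machine idea concrete, here is a small Go sketch assuming a deterministic key-value store as the state machine; the Op and StateMachine types are illustrative, not part of any of the cited algorithms:

```go
package main

import "fmt"

// Op is one deterministic operation on the state machine.
type Op struct {
	Key   string
	Value int
}

// StateMachine is a deterministic key-value store.
type StateMachine struct {
	state map[string]int
}

func NewStateMachine() *StateMachine {
	return &StateMachine{state: make(map[string]int)}
}

// Apply executes one operation; given the same sequence of Ops,
// every replica's state map ends up identical.
func (sm *StateMachine) Apply(op Op) {
	sm.state[op.Key] = op.Value
}

func main() {
	log := []Op{{"x", 1}, {"y", 2}, {"x", 3}} // agreed-upon operation order
	replicaA, replicaB := NewStateMachine(), NewStateMachine()
	for _, op := range log {
		replicaA.Apply(op)
		replicaB.Apply(op)
	}
	fmt.Println(replicaA.state, replicaB.state) // identical states
}
```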

Raft*. An algorithm for managing a replicated log, designed for understandability, i.e., to be easier to understand than Paxos. Two general approaches to consensus: symmetric, leader-less, where all servers are equal and clients can contact any server; and asymmetric, leader-based (Raft), where at any given point in time one server is in charge and clients communicate with the leader. Raft assumes crash failures and has no dependency on timing for safety (timing does matter for availability). *Partially based on the authors' slides.

Raft overview. Raft relies on a distinguished leader with full responsibility for managing the replicated log: the leader accepts log entries from clients, replicates them on the other servers, and tells servers when it is safe to apply log entries. Raft decomposes the consensus problem into: leader election (choose a new leader when the existing one fails); log replication (accept log entries from clients and replicate them); and safety (the key safety property, SM safety: if any server has applied a particular log entry to its SM, no other server may apply a different command for the same log index).

Back in 5.

Raft: 1. Leader election 2. Normal operation 3. Safety and consistency 4. Neutralize old leaders 5. Client protocol 6. Configuration changes

Servers: leaders and followers. A Raft cluster contains several servers. At any given time, each server is either a leader (handles client interaction, log replication, etc.), a follower (completely passive: it only responds to incoming RPCs and doesn't issue RPCs itself), or a candidate (a leader wannabe). Normal state: 1 leader, N-1 followers.
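
In an implementation, the three roles might be represented as a simple enumeration; this is a sketch with assumed names, not code from the Raft paper:

```go
package raftsketch

// ServerState enumerates the three roles a Raft server can be in.
type ServerState int

const (
	Follower  ServerState = iota // passive: only answers RPCs
	Candidate                    // trying to win an election
	Leader                       // handles clients and log replication
)
```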

Raft: time. Time is split into terms (acting as logical clocks); each term consists of an election followed by normal operation under a single leader. There is at most one leader per term, and some terms have no leader (a failed election, e.g., a split vote). Each server maintains the current term value, and current terms are exchanged whenever servers communicate. Key role of terms: identifying obsolete information.

RPCs, heartbeats, and timeouts. Servers start as followers. Followers receive RPCs from leaders or candidates; there are only two RPCs overall, AppendEntries and RequestVote, and both are idempotent. Leaders must send heartbeats (empty AppendEntries RPCs) to maintain authority. If the election timeout passes with no RPCs, a follower assumes the leader has crashed and starts a new election, putting itself up as a candidate. (State diagram: a server starts up as a follower; on timeout it starts an election and becomes a candidate; on discovering the current leader or a new term it goes back to follower.)
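
A minimal sketch of the follower-side timeout logic in Go, assuming a heartbeatCh channel that is signalled whenever a valid AppendEntries or RequestVote RPC arrives; all names here are illustrative:

```go
package raftsketch

import "time"

// runFollower waits for heartbeats; if none arrives within the election
// timeout, it invokes becomeCandidate. heartbeatCh is assumed to be
// signalled on every valid AppendEntries/RequestVote received.
func runFollower(heartbeatCh <-chan struct{}, electionTimeout func() time.Duration, becomeCandidate func()) {
	timer := time.NewTimer(electionTimeout())
	defer timer.Stop()
	for {
		select {
		case <-heartbeatCh:
			// Heard from the leader (or a candidate): restart the clock.
			if !timer.Stop() {
				<-timer.C
			}
			timer.Reset(electionTimeout())
		case <-timer.C:
			// Election timeout expired with no RPCs: assume the leader crashed.
			becomeCandidate()
			return
		}
	}
}
```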

Leader election. Increment the current term, become a candidate and vote for self, then send RequestVote RPCs to the others, retrying until either: 1. you receive votes from a majority — become the leader and send AppendEntries heartbeats to all others; 2. you receive an RPC from a valid leader — return to being a follower; or 3. no one wins the election, e.g., because of too many candidates (timeout) — increment the term and start a new election. (State diagram: follower becomes candidate on timeout; candidate becomes leader on getting a majority of votes; candidate or leader reverts to follower on discovering the current leader or a server with a higher term.)
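
A sketch of the vote-counting part of the candidate role, under the assumption that replies to RequestVote arrive on a channel and that timeout handling and leader discovery are dealt with elsewhere:

```go
package raftsketch

// runCandidate sketches one election round: the caller has already
// incremented currentTerm and voted for itself. voteCh delivers one bool
// per peer's RequestVote reply; the function returns true if this server
// won the election.
func runCandidate(voteCh <-chan bool, clusterSize int) (wonElection bool) {
	votes := 1 // voted for self
	for granted := range voteCh {
		if granted {
			votes++
		}
		if votes > clusterSize/2 {
			return true // majority reached: become leader, start heartbeats
		}
	}
	return false // channel closed without a majority: election failed
}
```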

Election: safety and liveness. Safety: allow at most one winner per term. Each server gives out only one vote per term, and the vote is made persistent on disk; thus two different candidates can't both get a majority in the same term. Liveness: some candidate must eventually win. In principle you could see repeated split votes, so servers choose election timeouts randomly within [T, 2T]; one server usually times out and wins the election before the others even notice the absence of a leader. This works well if the election timeout T >> broadcast time.
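
The randomized timeout might look like the following sketch; the base value T here is an assumption (the Raft paper suggests election timeouts on the order of 150-300 ms for typical broadcast times):

```go
package raftsketch

import (
	"math/rand"
	"time"
)

// baseElectionTimeout is an assumed base value T.
const baseElectionTimeout = 150 * time.Millisecond

// randomElectionTimeout picks a timeout uniformly in [T, 2T), so that one
// candidate usually times out well before the others and split votes are rare.
func randomElectionTimeout() time.Duration {
	return baseElectionTimeout + time.Duration(rand.Int63n(int64(baseElectionTimeout)))
}
```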

Normal operation. The client sends a command to the leader; the leader appends the command to its log and sends AppendEntries RPCs to the followers. Once the new entry is committed, the leader passes the command to its SM and returns the result to the client, and it notifies followers of committed entries in subsequent AppendEntries RPCs; followers then pass committed commands to their own state machines. Crashed or slow followers? The leader simply retries the RPC. Performance is optimal in the common case: one successful RPC round to any majority of servers.

Logs. (Figure: the leader's and followers' logs, with the committed prefix marked.) Log structure: every log entry is {index, term, command}. The log is stored on stable storage (disk) to survive crashes. An entry is committed once it is stored on a majority of servers; committed entries are durable and are eventually executed by all state machines.
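
A sketch of the log-entry structure and of the basic "stored on a majority" test (the extra current-term restriction on commitment comes up later); matchIndex and the other names are illustrative:

```go
package raftsketch

// LogEntry mirrors the structure described above: an index, the term in
// which the entry was created, and the client command it carries.
type LogEntry struct {
	Index   int
	Term    int
	Command []byte
}

// isCommitted sketches the commitment test for a single entry: it is
// committed once it is stored on a majority of the cluster's servers.
// matchIndex[i] is the highest log index known to be stored on server i
// (including the leader itself).
func isCommitted(entryIndex int, matchIndex []int) bool {
	replicas := 0
	for _, idx := range matchIndex {
		if idx >= entryIndex {
			replicas++
		}
	}
	return replicas > len(matchIndex)/2
}
```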

Log consistency. Raft maintains the following properties: if two entries in different logs have the same index and term, they store the same command; and if two entries in different logs have the same index and term, the logs are identical in all preceding entries.

AppendEntries consistency check. Each AppendEntries RPC contains the index and term of the entry in the log immediately preceding the new one. The follower must contain a matching entry, or else it rejects the request. This implements an induction step that ensures log coherency. (Figure: AppendEntries succeeds when the follower's log matches at the preceding entry, and fails otherwise.)
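
The follower-side check might look like this sketch, reusing the LogEntry type from the earlier sketch and assuming a 1-based log where index 0 is unused:

```go
package raftsketch

// consistencyCheckOK is only the consistency check a follower runs before
// accepting new entries: the leader's prevLogIndex/prevLogTerm must match an
// entry already in the follower's log. This is not a full RPC handler.
func consistencyCheckOK(log []LogEntry, prevLogIndex, prevLogTerm int) bool {
	if prevLogIndex == 0 {
		return true // appending at the very start of the log always matches
	}
	if prevLogIndex >= len(log) {
		return false // follower's log is too short: reject
	}
	return log[prevLogIndex].Term == prevLogTerm
}
```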

Safety requirement. Once a log entry has been applied to a state machine, no other state machine may apply a different value for that log entry. The Raft safety property (a bit narrower): if a leader has decided that a log entry is committed, that entry will be present in the logs of all future leaders. This guarantees the safety requirement because: leaders don't overwrite or delete entries; only entries in the leader's log can be committed; and an entry must be committed before being applied to a state machine.

Picking the best leader. We can't always tell which entries are committed (the old leader knows, but its log is unavailable during the transition!). During an election, choose the candidate whose log is most likely to contain all committed entries: candidates include log info in their RequestVote RPCs (the index and term of their last log entry), and a voting server V denies its vote if its own log is more complete, i.e., if (lastTerm_V > lastTerm_C) or (lastTerm_V == lastTerm_C and lastIndex_V > lastIndex_C). The elected leader will then have the most complete log among the electing majority.
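
The vote-denial rule can be written as a small comparison; this is an illustrative helper, not a full RequestVote handler (a real handler also checks terms and whether a vote was already granted in the current term):

```go
package raftsketch

// logMoreComplete encodes the voting rule above: a voter denies its vote if
// its own log is "more complete" than the candidate's, comparing the term of
// the last entry first and using the last index as a tie-breaker.
func logMoreComplete(lastTermV, lastIndexV, lastTermC, lastIndexC int) bool {
	if lastTermV != lastTermC {
		return lastTermV > lastTermC
	}
	return lastIndexV > lastIndexC
}

// grantVote returns whether the voter should grant its vote, judged on log
// completeness alone.
func grantVote(lastTermV, lastIndexV, lastTermC, lastIndexC int) bool {
	return !logMoreComplete(lastTermV, lastIndexV, lastTermC, lastIndexC)
}
```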

Committing an entry from the current term. Case 1 of 2: the leader decides that an entry from its current term is committed. (Figure: the leader's AppendEntries has just succeeded on a majority of servers, so entry 4 is committed; the servers that lack it cannot be elected leader for the next term.) This is safe: any leader for the next term must contain entry 4.

Committing an entry from an earlier term. Case 2 of 2: the leader is trying to finish committing an entry from an earlier term. (Figure: the leader of term 4 has just succeeded in replicating the old entry on a majority via AppendEntries.) The entry is still not safely committed: S5 can be elected as leader for term 5, and if elected it will overwrite the entry on the servers that stored it! We need a new rule for commitment.

New commitment rules. For a leader to decide an entry is committed: it must be stored on a majority of servers, and at least one new entry from the leader's own term must also be stored on a majority of servers. (Figure: once entry 4 from the leader's term 4 is committed, S5 cannot be elected leader for term 5, so both the earlier entry and entry 4 are safe.) The combination of the election rules and the commitment rules makes Raft safe.
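
A sketch of the refined commitment test, building on the earlier isCommitted and LogEntry sketches; names remain illustrative:

```go
package raftsketch

// safeToCommit refines the earlier isCommitted sketch with Raft's extra rule:
// a leader only advances the commit point over entries from its own current
// term (entries from earlier terms then become committed indirectly).
// log uses 1-based indices as in the earlier sketches.
func safeToCommit(log []LogEntry, entryIndex, currentTerm int, matchIndex []int) bool {
	if !isCommitted(entryIndex, matchIndex) {
		return false // not yet stored on a majority of servers
	}
	// Only count the entry as committed if it belongs to the leader's term.
	return log[entryIndex].Term == currentTerm
}
```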

Leader changes. Leader crashes can leave the logs inconsistent, and inconsistencies can compound over a series of leader and follower crashes: missing and extraneous entries in a log may span multiple terms. Raft's handling of inconsistencies: no special steps for a new leader; the leader's log is the truth, and the leader will eventually make each follower's log identical to its own.

Log inconsistencies. Leader changes can result in log inconsistencies. (Figure: the log of the leader for term 8 alongside possible follower logs, some with missing entries and some with extraneous entries.)

Repairing follower logs. The new leader makes the followers' logs consistent with its own: delete extraneous entries, fill in missing entries. The leader keeps a nextIndex for each follower: the index of the next log entry to send to that follower, initialized to (1 + the leader's last index). When the AppendEntries consistency check fails, the leader decrements nextIndex and tries again. When the follower overwrites an inconsistent entry, it deletes all subsequent entries. (Figure: a follower's log before and after repair.)
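
The nextIndex backoff can be sketched as follows, assuming sendAppendEntries performs an RPC whose consistency check uses the entry just before nextIndex and reports success; since the check always succeeds at index 1 (empty prefix), the loop terminates:

```go
package raftsketch

// syncFollower sketches the repair loop for one follower. lastIndex is the
// index of the leader's last log entry; names are illustrative.
func syncFollower(lastIndex int, sendAppendEntries func(nextIndex int) bool) {
	nextIndex := lastIndex + 1 // start optimistic: just past the leader's log
	for {
		if sendAppendEntries(nextIndex) {
			return // follower's log now matches the leader's up to lastIndex
		}
		// Consistency check failed: back up one entry and retry.
		nextIndex--
	}
}
```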

Neutralizing old leaders. A deposed leader may not be dead: it may have been temporarily disconnected from the network, the other servers elect a new leader, and then the old leader reconnects and attempts to commit log entries. Terms are used to detect stale leaders (and candidates): every RPC contains the term of the sender. If the sender's term is older, the RPC is rejected, and the sender reverts to follower and updates its term. If the receiver's term is older, it reverts to follower, updates its term, and then processes the RPC normally (as a good follower). The election process updates the terms of a majority of servers (the candidate includes its own term in its requests, and everybody updates), so after an election the deposed server cannot commit new log entries.
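
The term check run on every RPC might look like this sketch, written from the receiver's point of view with assumed names:

```go
package raftsketch

// checkTerm sketches the stale-leader check applied to every incoming RPC.
// It returns whether the RPC should be processed; stepDown is assumed to move
// this server back to the follower state.
func checkTerm(senderTerm int, currentTerm *int, stepDown func()) (process bool) {
	switch {
	case senderTerm < *currentTerm:
		// Sender is stale: reject the RPC (the reply carries our newer term,
		// so the sender will revert to follower and update its own term).
		return false
	case senderTerm > *currentTerm:
		// We are stale: adopt the newer term, revert to follower, then
		// process the RPC normally.
		*currentTerm = senderTerm
		stepDown()
		return true
	default:
		return true
	}
}
```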

Client protocol. Clients send commands to the leader; if the leader is unknown, they contact any server, and if the contacted server is not the leader, it redirects them to the leader. The leader does not respond until the command has been logged, committed, and executed by the leader's state machine. If a request times out (e.g., because the leader crashed), the client reissues the command to some other server, is eventually redirected to the new leader, and retries the request with it.

Client protocol (continued). What if the leader crashes after executing a command but before responding? The command must not be executed twice. Solution: the client embeds a unique id in each command, and the server includes the id in the log entry. Before accepting a command, the leader checks its log for an entry with that id; if the id is already in the log, it ignores the new command and returns the response from the old one. Result: exactly-once semantics, as long as the client doesn't crash.
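
A sketch of the leader-side duplicate check, assuming the leader keeps a cache mapping command ids to the responses already produced by its state machine (types and names are illustrative):

```go
package raftsketch

// ClientCommand carries a client-chosen unique id alongside the command
// payload, as described above.
type ClientCommand struct {
	ID      string
	Payload []byte
}

// acceptCommand sketches the duplicate check: if a command with the same id
// has already been executed, the cached response is returned instead of
// appending the command again.
func acceptCommand(cmd ClientCommand, appliedResponses map[string][]byte,
	appendToLog func(ClientCommand)) (response []byte, duplicate bool) {
	if resp, ok := appliedResponses[cmd.ID]; ok {
		return resp, true // already executed: return the old response
	}
	appendToLog(cmd) // new command: replicate it as usual
	return nil, false
}
```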

Finally, configuration changes. Until now the system configuration, which determines what constitutes a majority, was considered fixed. The consensus mechanism must support configuration changes, e.g., to replace a failed machine or to change the degree of replication. We cannot switch directly from one configuration to another: conflicting majorities could arise. (Figure: during a direct switch from C_old to C_new, a majority of C_old and a majority of C_new can be two disjoint majorities.)

Two-phase change: joint consensus. The intermediate phase uses joint consensus: elections and commitment need a majority of both the old and the new configuration. A configuration change is just a log entry, applied immediately on receipt (committed or not). Once the joint consensus entry is committed, the leader begins replicating the log entry for the final configuration. (Figure timeline: C_old can make unilateral decisions until the C_old+new entry is committed; C_new can make unilateral decisions once the C_new entry is committed.)
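
The joint-consensus quorum rule can be sketched as a set check: a group of agreeing servers counts only if it is a majority of both configurations (configurations and votes are represented as maps of server ids; illustrative):

```go
package raftsketch

// jointQuorum implements the rule during joint consensus: a set of agreeing
// servers forms a quorum only if it contains a majority of the old
// configuration AND a majority of the new one.
func jointQuorum(votes, oldConfig, newConfig map[string]bool) bool {
	return hasMajority(votes, oldConfig) && hasMajority(votes, newConfig)
}

func hasMajority(votes, config map[string]bool) bool {
	count := 0
	for id := range config {
		if votes[id] {
			count++
		}
	}
	return count > len(config)/2
}
```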

Joint consensus, additional details. Any server from either configuration can serve as leader. If the current leader is not in C_new, it must step down once C_new is committed. (Figure timeline: the leader not in C_new steps down at the point where the C_new entry is committed.)

Summary. How do we make processes agree on a value after one or more of them have proposed what that value should be? Consensus, in synchronous and asynchronous systems. This is not just theory: Chubby, ZooKeeper, and several Raft implementations are out there (including in Go). A good demo of Raft: http://thesecretlivesofdata.com/raft/