Byzantine Fault Tolerance

Byzantine Fault Tolerance. CS 240: Computing Systems and Concurrency, Lecture 11. Marco Canini. Credits: Michael Freedman and Kyle Jamieson developed much of the original material.

So far: fail-stop failures. Traditional state machine replication tolerates fail-stop failures: node crashes, and network breaks or partitions. State machine replication with N = 2f+1 replicas can tolerate f simultaneous fail-stop failures. Two algorithms: Paxos and Raft.

Byzantine faults. A Byzantine fault: a node or component fails arbitrarily. It might perform incorrect computation, give conflicting information to different parts of the system, or collude with other failed nodes. Why might nodes or components fail arbitrarily? A software bug in the code, a hardware failure, or an attack on the system.

Today: Byzantine fault tolerance. Can we provide state machine replication for a service in the presence of Byzantine faults? Such a service is called a Byzantine Fault Tolerant (BFT) service. Why might we care about this level of reliability?

Mini-case-study: the Boeing 777 fly-by-wire primary flight control system. Triple-redundant, dissimilar processor hardware: 1. Intel 80486, 2. Motorola, 3. AMD. Key techniques: hardware and software diversity (each processor runs code from a different compiler), and voting between components. Simplified design: pilot inputs → three processors; processors vote → control surface.
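
As a rough illustration of the voting idea (not Boeing's actual logic), here is a minimal 2-out-of-3 voter sketch; the channel values and comparison tolerance are made-up assumptions.

```python
# Illustrative 2-out-of-3 voter over triple-redundant channels.
# Not the actual 777 implementation; tolerance and values are invented.

def vote(a: float, b: float, c: float, tol: float = 1e-6) -> float:
    """Return a value that at least two of the three channels agree on."""
    if abs(a - b) <= tol:
        return (a + b) / 2
    if abs(a - c) <= tol:
        return (a + c) / 2
    if abs(b - c) <= tol:
        return (b + c) / 2
    raise RuntimeError("no two channels agree")

# One processor produces garbage; the other two outvote it.
print(vote(12.5, 12.5, 99.0))   # -> 12.5
```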

Today: 1. Traditional state-machine replication for BFT? 2. Practical BFT replication algorithm. 3. Performance and discussion.

Review: tolerating one fail-stop failure. Traditional state machine replication (Paxos) requires, e.g., 2f + 1 = three replicas if f = 1. Operations are totally ordered → correctness. It is a two-phase protocol; each operation uses a quorum of f + 1 = 2 of them. Quorums overlap, so at least one replica remembers each operation.
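
A small sketch (illustrative, not from the lecture) of this majority-quorum arithmetic: with N = 2f + 1 replicas and quorums of f + 1, any two quorums overlap in at least one replica.

```python
# Majority-quorum arithmetic for crash (fail-stop) tolerance.

def crash_smr_sizes(f: int):
    n = 2 * f + 1                 # total replicas
    quorum = f + 1                # any majority
    min_overlap = 2 * quorum - n  # worst-case overlap of two quorums
    return n, quorum, min_overlap

for f in (1, 2, 3):
    n, q, overlap = crash_smr_sizes(f)
    assert overlap >= 1           # some replica remembers every chosen op
    print(f"f={f}: N={n}, quorum={q}, min overlap={overlap}")
```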

Use Paxos for BFT? 1. Can't rely on the primary to assign the seqno: it could assign the same seqno to different requests. 2. Can't use Paxos for view change: under Byzantine faults, the intersection of two majority (f + 1 node) quorums may be a bad node, and the bad node tells different quorums different things! E.g. it tells N0 to accept val1, but N1 to accept val2.

Paxos under Byzantine faults (f = 1), shown over four diagrams: N0 sends Prepare(N0:1); N1 and the Byzantine node N2 answer OK (N1 with val=null), so both N0 and N1 set n_h = N0:1. With f + 1 OKs, N0 sends Accept(N0:1, val=xyz) and decides xyz. N2 then runs its own round with N1, moving N1 to n_h = N2:1, and with f + 1 responses N1 decides abc. Conflicting decisions!

Back to theoretical fundamentals: the Byzantine generals. Generals are camped outside a city, waiting to attack, and must agree on a common battle plan: attack or wait together → success. However, one or more of them may be traitors who will try to confuse the others. Problem: using messengers, find an algorithm to ensure the loyal generals agree on a plan. The problem is solvable if and only if more than two-thirds of the generals are loyal.

Put the burden on the client instead? Clients sign input data before storing it, then verify signatures on data retrieved from the service. Example: store a signed file f1 = "aaa" with the server, and verify that the returned f1 is correctly signed. But a Byzantine node can replay stale, signed data in its response. It is also inefficient: clients have to perform computations and sign data.
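
A minimal sketch of this client-signing idea and its replay weakness, using an HMAC as the "signature"; the key and file names are illustrative assumptions, not part of the lecture.

```python
import hashlib
import hmac

CLIENT_KEY = b"client-secret-key"   # known only to the client (assumed)

def sign(value: bytes) -> bytes:
    return hmac.new(CLIENT_KEY, value, hashlib.sha256).digest()

def verify(value: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(value), tag)

# Client stores f1 = "aaa", later overwrites it with "bbb".
old_version = (b"aaa", sign(b"aaa"))
new_version = (b"bbb", sign(b"bbb"))

# A Byzantine server replays the old, correctly signed version:
value, tag = old_version
assert verify(value, tag)              # the signature check passes...
print("accepted stale value:", value)  # ...even though the data is stale
```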

Today: 1. Traditional state-machine replication for BFT? 2. Practical BFT replication algorithm [Castro & Liskov, 2001]. 3. Performance and discussion.

Practical BFT: overview. Uses 3f+1 replicas to survive f failures (shown to be minimal by Lamport). Requires three phases (not two). Provides state machine replication for an arbitrary service accessed by operations, e.g., file system ops that read and write files and directories. Tolerates Byzantine-faulty clients.
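
For concreteness, a one-liner sketch of the replica-count arithmetic (3f + 1 replicas, Byzantine quorums of 2f + 1); this just restates the formula from the slide, it is not lecture code.

```python
def bft_sizes(f: int):
    """Replicas and quorum size needed to tolerate f Byzantine faults."""
    return 3 * f + 1, 2 * f + 1   # (total replicas, Byzantine quorum)

for f in (1, 2, 3):
    n, q = bft_sizes(f)
    print(f"f={f}: N={n} replicas, quorum={q}")
```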

Correctness argument. Assume operations are deterministic and replicas start in the same state. Then if replicas execute the same requests in the same order, correct replicas will produce identical results.

Non-problem: client failures. Clients can't cause internal inconsistencies to the data in the servers: the state machine replication property makes sure clients don't stop halfway through and leave the system in a bad state. Clients can write bogus data to the system; the system should authenticate clients and separate their data just like any other datastore. This is a separate problem.

What clients do: 1. Send requests to the primary replica. 2. Wait for f+1 identical replies. Note: the replies may be deceptive (i.e. a replica returns the correct answer, but locally does otherwise!), but one reply is actually from a non-faulty replica.
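
A sketch of the client's reply rule (accept a result only once f + 1 identical replies arrive); the message format and names are illustrative assumptions.

```python
from collections import Counter

def accept_reply(replies, f):
    """replies: dict mapping replica id -> reply body.
    Return the result once f + 1 replicas agree, else None."""
    if not replies:
        return None
    result, votes = Counter(replies.values()).most_common(1)[0]
    return result if votes >= f + 1 else None

# f = 1: two matching replies are enough, even with one lying replica.
print(accept_reply({0: "ok:42", 1: "ok:42", 2: "ok:99"}, f=1))  # ok:42
print(accept_reply({0: "ok:42", 2: "ok:99"}, f=1))              # None
```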

What replicas do: carry out a protocol that ensures that replies from honest replicas are correct, and that enough replicas process each request to ensure that the non-faulty replicas process the same requests in the same order. Non-faulty replicas obey the protocol.

Primary-Backup protocol: the group runs in a view, and the view number designates the primary replica. The primary is the node whose id (modulo the view number) equals 1.

Ordering requests: the primary picks the ordering of requests, but the primary might be a liar! The backups ensure the primary behaves correctly: they check and certify correct ordering, and trigger view changes to replace a faulty primary.

Byzantine quorums (f = 1): a Byzantine quorum contains 2f+1 replicas, so one op's quorum overlaps with the next op's quorum. There are 3f+1 replicas in total, so the overlap is at least f+1 replicas, and f+1 replicas must contain at least 1 non-faulty replica.
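
An exhaustive check of this overlap property for f = 1 (an illustrative sketch, not from the lecture): any two quorums of 2f + 1 replicas out of 3f + 1 share at least f + 1 replicas, hence at least one non-faulty one.

```python
from itertools import combinations

def check_quorum_overlap(f):
    n, q = 3 * f + 1, 2 * f + 1
    for q1 in combinations(range(n), q):
        for q2 in combinations(range(n), q):
            overlap = set(q1) & set(q2)
            assert len(overlap) >= f + 1   # quorums intersect in f+1 replicas
            assert len(overlap) - f >= 1   # at most f faulty -> one is honest

check_quorum_overlap(1)
print("overlap property holds for f = 1")
```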

Quorum certificates: a Byzantine quorum contains 2f+1 replicas. A quorum certificate is a collection of 2f+1 signed, identical messages from a Byzantine quorum; all messages agree on the same statement.
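
A data-structure sketch of a quorum certificate; field names are illustrative, and signature verification is assumed to have happened elsewhere.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignedMessage:
    replica_id: int
    statement: str     # e.g. "accept seq(m)=5 in view 0"
    signature: bytes   # assumed already verified against replica_id's key

def is_quorum_certificate(msgs, f):
    """2f + 1 identical statements from 2f + 1 distinct replicas."""
    senders = {m.replica_id for m in msgs}
    statements = {m.statement for m in msgs}
    return len(statements) == 1 and len(senders) >= 2 * f + 1

msgs = [SignedMessage(i, "accept seq(m)=5 in view 0", b"sig") for i in range(3)]
print(is_quorum_certificate(msgs, f=1))   # True
```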

Keys: each client and replica has a private-public keypair. Secret keys are used for symmetric cryptography; each key is known only to the two communicating parties and is bootstrapped using the public keys. Each client and replica has the following secret keys: one key per replica for sending messages, and one key per replica for receiving messages.

Ordering requests: the client sends request m, signed by the client, to the primary; the primary sends "let seq(m) = n", signed by the primary, to the backups. The primary could be lying, sending a different message to each backup! The primary chooses the request's sequence number n; the sequence number determines the order of execution.

Checking the primary's message: after the primary sends "let seq(m) = n" (signed by the primary), each backup locally verifies it has seen only one client request for sequence number n. If the local check passes, the replica broadcasts an accept message ("I accept seq(m) = n", signed by that backup). Each replica makes this decision independently.
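
A sketch of that local check before a backup broadcasts its accept: refuse if the same sequence number was already bound to a different request. The state layout and names are illustrative assumptions.

```python
class Backup:
    def __init__(self):
        self.bound = {}   # sequence number -> digest of the bound request

    def on_primary_assignment(self, n, digest):
        """Return True if this backup should broadcast 'I accept seq(m)=n'."""
        if n in self.bound and self.bound[n] != digest:
            return False          # primary already bound n to another request
        self.bound[n] = digest
        return True

b = Backup()
print(b.on_primary_assignment(5, "digest(m)"))    # True: first binding of 5
print(b.on_primary_assignment(5, "digest(m')"))   # False: conflicting binding
```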

Collecting a prepared certificate (f = 1): backups wait to collect a prepared quorum certificate. A message is prepared (P) at a replica when it has: a message from the primary proposing the seqno, and 2f messages from itself and others accepting the seqno. Each correct node then has a prepared certificate locally, but does not know whether the other correct nodes do too! So we can't commit yet.
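
A sketch of the "prepared" test as the slide states it (primary's message plus 2f matching accepts, own included); the message representation is an assumption.

```python
def is_prepared(primary_msg, accepts, n, digest, f):
    """primary_msg: (seqno, digest) received from the primary, or None.
    accepts: list of (sender id, digest) accept messages, own included."""
    if primary_msg != (n, digest):
        return False
    matching = {sender for sender, d in accepts if d == digest}
    return len(matching) >= 2 * f

# f = 1: primary message plus two matching accepts -> prepared.
print(is_prepared((5, "D"), [(1, "D"), (2, "D")], 5, "D", f=1))  # True
print(is_prepared((5, "D"), [(1, "D")], 5, "D", f=1))            # False
```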

Collecting a committed certificate (f = 1): prepared replicas announce that they know a quorum accepts, each sending "have cert for seq(m) = n" signed by its sender. Replicas wait for a committed quorum certificate C: 2f+1 different statements that a replica is prepared. Once the request is committed, replicas execute the operation and send a reply directly back to the client.
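
A companion sketch of the commit step: execute only after a committed certificate (2f + 1 "I am prepared" statements) is in hand. Names and callbacks are illustrative.

```python
def is_committed(prepared_claims, digest, f):
    """prepared_claims: list of (sender id, digest) 'I am prepared' messages."""
    senders = {sender for sender, d in prepared_claims if d == digest}
    return len(senders) >= 2 * f + 1

def maybe_execute(prepared_claims, digest, f, operation, reply_to_client):
    if is_committed(prepared_claims, digest, f):
        reply_to_client(operation())   # execute, then reply directly to client

maybe_execute([(0, "D"), (1, "D"), (2, "D")], "D", 1,
              operation=lambda: "done", reply_to_client=print)   # prints "done"
```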

Byzantine primary (f = 1): the primary tells Backup 1 "let seq(m) = n" but tells Backups 2 and 3 "let seq(m') = n", and the backups broadcast their accepts accordingly. Recall: to prepare, a replica needs the primary's message and 2f accepts. Backup 1 has the primary's message for m but accept messages for m'; Backups 2 and 3 have the primary's message plus one matching accept. No one has accumulated enough messages to prepare → time for a view change.

Byzantine primary: in general, backups won't prepare if the primary lies. Suppose they did: two distinct requests m and m' prepared for the same sequence number n. Then the prepared quorum certificates (each of size 2f+1) would intersect at an honest replica, so that honest replica would have sent an accept message for both m and m'. So m = m'.
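
A brute-force check of this argument for f = 1 (an illustrative sketch): however the f faulty replicas are chosen, any two certificates of 2f + 1 replicas share an honest replica, and that replica accepts at most one request per sequence number.

```python
from itertools import combinations

f = 1
n = 3 * f + 1
quorum = 2 * f + 1

for faulty in combinations(range(n), f):
    honest = set(range(n)) - set(faulty)
    for cert_m in combinations(range(n), quorum):
        for cert_m2 in combinations(range(n), quorum):
            # Some honest replica appears in both certificates; it would have
            # had to accept both m and m' for seqno n, so in fact m = m'.
            assert (set(cert_m) & set(cert_m2)) & honest

print("two prepared certificates for the same seqno must agree on the request")
```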

View change: if a replica suspects the primary is faulty, it requests a view change. It sends a view-change request to all replicas, and everyone acks the view-change request. The new primary collects a quorum (2f+1) of responses and sends a new-view message with this certificate.
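
A bookkeeping sketch of the rule just described (collect 2f + 1 responses, then announce the new view); real PBFT view-change messages also carry prepared certificates, which this sketch omits.

```python
def can_install_new_view(viewchange_acks, f):
    """viewchange_acks: set of replica ids that acked the view-change request."""
    return len(viewchange_acks) >= 2 * f + 1   # quorum certificate for new-view

print(can_install_new_view({0, 1, 3}, f=1))   # True: new primary sends NEW-VIEW
print(can_install_new_view({0, 3}, f=1))      # False: keep waiting
```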

Considerations for view change: committed operations need to survive into the next view, since the client may already have gotten an answer. We also need to preserve liveness: if replicas are too quick to do a view change when the primary is really okay, we get a performance problem; or a malicious replica may try to subvert the system by proposing a bogus view change.

Garbage collection: all messages and certificates are stored in a log, but we can't let the log grow without bound, so we need a protocol to shrink the log when it gets too big. Discard messages and certificates on commit? No! They are needed for view change. Replicas have to agree to shrink the log.
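
A bookkeeping sketch of one way to shrink the log by agreement, in the spirit of PBFT's periodic checkpoints: once 2f + 1 replicas vouch for the state at sequence number k, entries up to k can be discarded. Names and data layout are illustrative.

```python
def truncate_log(log, checkpoint_votes, f):
    """log: dict seqno -> entry; checkpoint_votes: dict seqno -> set of voters."""
    stable = [k for k, voters in checkpoint_votes.items()
              if len(voters) >= 2 * f + 1]      # checkpoints vouched by a quorum
    if not stable:
        return log
    cut = max(stable)
    return {n: entry for n, entry in log.items() if n > cut}

log = {1: "op1", 2: "op2", 3: "op3"}
print(truncate_log(log, {2: {0, 1, 2}}, f=1))   # -> {3: 'op3'}
```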

Proactive recovery: what we've done so far gives good service provided there are no more than f failures over the system lifetime, but we cannot recognize faulty replicas! Therefore, proactive recovery: recover each replica to a known good state, whether faulty or not. This gives correct service provided there are no more than f failures in a small time window, e.g., 10 minutes.

Recovery protocol sketch: a watchdog timer; a secure co-processor that stores the node's private key (of its private-public keypair); read-only memory. Restart each node periodically: it saves its state (a timed operation), reboots and reloads code from read-only memory, discards all secret keys (to prevent impersonation), and establishes new secret keys and state.

Today: 1. Traditional state-machine replication for BFT? 2. Practical BFT replication algorithm [Castro & Liskov, 2001]. 3. Performance and discussion.

File system benchmarks: the BFS filesystem runs atop BFT, with four replicas tolerating one Byzantine failure, on a modified Andrew filesystem benchmark. What's its performance relative to NFS? Comparing BFS versus Linux NFSv2 (unsafe!), BFS is 15% slower: the claim is that it can be used in practice.

Practical limitations of BFT: protection is achieved only when at most f nodes fail (is one node more or less secure than four?), and independent implementations of the service are needed. It needs more messages and rounds than conventional state machine replication. It does not prevent many classes of attacks, such as turning a machine into a botnet node or stealing data from the servers.

Large impact: it inspired much follow-on work to address its limitations, and the ideas surrounding Byzantine fault tolerance have found numerous applications: the Boeing 777 and 787 flight control computer systems, and digital currency systems.

Sunday topic: Peer-to-Peer Systems and Distributed Hash Tables