Today: Fault Tolerance. Replica Management

Today: Fault Tolerance
- Failure models
- Agreement in the presence of faults
  - Two-army problem
  - Byzantine generals problem
- Reliable communication
- Distributed commit
  - Two-phase commit
  - Three-phase commit
- Failure recovery
  - Checkpointing
  - Message logging

Replica Management
- Replica server placement
  - Web: geographically skewed request patterns
  - Where to place a proxy? K-clusters algorithm (a placement sketch follows below)
- Permanent replicas versus temporary
  - Mirroring: all replicas mirror the same content
  - Proxy server: on-demand replication
  - Server-initiated versus client-initiated
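A minimal sketch of the placement idea above, in Python. This is a simple greedy heuristic under assumed inputs (a client-to-site latency matrix), not necessarily the exact k-clusters algorithm the slide refers to: repeatedly pick the candidate site that most reduces the total latency from each client region to its nearest chosen replica.

# Greedy replica placement sketch (illustrative; the inputs are assumptions).
# latency[c][s] = measured latency from client region c to candidate site s.
def place_replicas(latency, k):
    """Pick k candidate sites that greedily minimize total client latency."""
    num_clients, num_sites = len(latency), len(latency[0])
    chosen = []
    for _ in range(k):
        best_site, best_cost = None, float("inf")
        for s in range(num_sites):
            if s in chosen:
                continue
            trial = chosen + [s]
            # Each client region uses its nearest replica in the trial placement.
            cost = sum(min(latency[c][site] for site in trial) for c in range(num_clients))
            if cost < best_cost:
                best_site, best_cost = s, cost
        chosen.append(best_site)
    return chosen

# Example: 4 client regions, 3 candidate sites, place 2 replicas.
latency = [[10, 50, 90],
           [80, 20, 60],
           [85, 25, 55],
           [90, 70, 15]]
print(place_replicas(latency, 2))   # [1, 2]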

Content Distribution
- Will come back to this in Chapter 12
- CDN: network of proxy servers
- Caching: update versus invalidate
- Push versus pull-based approaches
- Stateful versus stateless
- Web caching: what semantics to provide?

Final Thoughts
- Replication and caching improve performance in distributed systems
- Consistency of replicated data is crucial
- Many consistency semantics (models) are possible
  - Need to pick an appropriate model depending on the application
  - Example: web caching: weak consistency is OK since humans are tolerant of stale information (they can reload the page)
- Implementation overhead and complexity grow as stronger guarantees are desired

Fault Tolerance
- Single-machine systems
  - Failures are all or nothing: OS crash, disk failures
- Distributed systems: multiple independent nodes
  - Partial failures are also possible (some nodes fail)
- Question: can we automatically recover from partial failures?
  - Important issue, since the probability of failure grows with the number of independent components (nodes) in the system
  - P(failure) = P(at least one component fails) = 1 - P(no component fails); with n components failing independently with probability p, this is 1 - (1 - p)^n (see the sketch below)

A Perspective
- Computing systems are not very reliable
  - OS crashes frequently (Windows), buggy software, unreliable hardware, software/hardware incompatibilities
- Until recently, computer users were tech savvy
  - Could depend on users to reboot and troubleshoot problems
- Growing popularity of the Internet/World Wide Web brings novice users
  - Need to build more reliable/dependable systems
- Example: what if your TV (or car) broke down every day?
  - Users don't want to restart the TV or fix it (by opening it up)
- Need to make computing systems more reliable
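To make the point about partial failures concrete, a quick calculation using the formula above (the per-node failure probability p = 0.01 is only an illustrative assumption):

# Probability that a system of n independent nodes sees at least one failure.
def prob_any_failure(p, n):
    # P(failure) = 1 - P(no node fails) = 1 - (1 - p)^n
    return 1 - (1 - p) ** n

p = 0.01   # assumed per-node failure probability over some interval
for n in (1, 10, 100, 1000):
    print(n, round(prob_any_failure(p, n), 4))
# 1 0.01    10 0.0956    100 0.634    1000 1.0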

Basic Concepts
- Need to build dependable systems
- Requirements for dependable systems
  - Availability: system should be available for use at any given time
    - 99.999% availability ("five 9s") => very small downtime (roughly five minutes per year; see the arithmetic below)
  - Reliability: system should run continuously without failure
  - Safety: temporary failures should not result in a catastrophe
    - Example: computing systems controlling an airplane or a nuclear reactor
  - Maintainability: a failed system should be easy to repair

Basic Concepts (contd)
- Fault tolerance: system should provide services despite faults
  - Transient faults
  - Intermittent faults
  - Permanent faults
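The "five 9s" figure translates directly into permitted downtime. A small sketch of the arithmetic, for the usual "number of nines" availability levels:

# Maximum downtime per year implied by an availability level.
SECONDS_PER_YEAR = 365 * 24 * 3600

for nines in (2, 3, 4, 5):
    availability = 1 - 10 ** (-nines)              # e.g. five nines -> 0.99999
    downtime_min = (1 - availability) * SECONDS_PER_YEAR / 60
    print(f"{availability:.5f} -> {downtime_min:.1f} minutes/year")
# 0.99000 -> 5256.0 minutes/year (~3.7 days)
# 0.99900 -> 525.6 minutes/year (~8.8 hours)
# 0.99990 -> 52.6 minutes/year
# 0.99999 -> 5.3 minutes/year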

Failure Models (different types of failures)
- Crash failure: a server halts, but is working correctly until it halts
- Omission failure: a server fails to respond to incoming requests
  - Receive omission: a server fails to receive incoming messages
  - Send omission: a server fails to send messages
- Timing failure: a server's response lies outside the specified time interval
- Response failure: the server's response is incorrect
  - Value failure: the value of the response is wrong
  - State transition failure: the server deviates from the correct flow of control
- Arbitrary failure: a server may produce arbitrary responses at arbitrary times

Failure Masking by Redundancy
(Figure: triple modular redundancy.)
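Triple modular redundancy masks a single faulty component by running three copies and voting on their outputs. A minimal voter sketch (the component functions here are made up for illustration):

from collections import Counter

def tmr_vote(replicas, *args):
    """Run three replicas of a component and return the majority output."""
    outputs = [replica(*args) for replica in replicas]
    value, count = Counter(outputs).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one replica failed")
    return value

# Example: one faulty replica is masked by the two correct ones.
good = lambda x: x + 1
faulty = lambda x: x - 42                   # assumed faulty component
print(tmr_vote([good, good, faulty], 10))   # 11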

Agreement in Faulty Systems
- How should processes agree on the results of a computation?
- K-fault tolerant: the system can survive k faults and yet function
- Assume processes fail silently
  - Need (k+1)-fold redundancy to tolerate k faults
- Byzantine failures: processes keep running even when sick
  - Produce erroneous, random, or malicious replies
  - Byzantine failures are the most difficult to deal with
  - Need (2k+1)-fold redundancy to handle k Byzantine faults

Byzantine Faults
- Simplified scenario: two perfect processes with an unreliable channel
  - Need to reach agreement on a 1-bit message
- Two-army problem: two armies waiting to attack
  - Each army coordinates with a messenger
  - The messenger can be captured by the hostile army
  - Can the generals reach agreement?
  - Property: two perfect processes can never reach agreement in the presence of an unreliable channel
- Byzantine generals problem: can N generals reach agreement over a perfect channel?
  - M generals out of N may be traitors

Byzantine Generals Problem
- Recursive algorithm by Lamport
- (Figure: the Byzantine generals problem for 3 loyal generals and 1 traitor.
  a) The generals announce their troop strengths (in units of 1 kilosoldier).
  b) The vectors that each general assembles based on (a).
  c) The vectors that each general receives in step 3.)

Byzantine Generals Problem Example
- (Figure: the same as in the previous slide, except now with 2 loyal generals and one traitor.)
- Property: with m faulty processes, agreement is possible only if 2m+1 processes function correctly [Lamport 82]
  - Need more than two-thirds of the processes to function correctly
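A small simulation of the vector-exchange algorithm sketched in the figures above, for 4 generals with 1 traitor (the traitor's identity and its lies are made-up values): each loyal general collects the announced troop strengths, forwards its vector to the others, and then takes an element-wise majority over the vectors it receives. All loyal generals end up agreeing on the loyal generals' values, while the traitor's slot stays UNKNOWN.

from collections import Counter

N = 4
TRAITOR = 2
strengths = {0: 1, 1: 2, 3: 4}            # troop strengths of the loyal generals

def announced(sender, receiver):
    """Step 1: value that 'sender' tells 'receiver' (the traitor lies arbitrarily)."""
    return 100 + receiver if sender == TRAITOR else strengths[sender]

# Step 2: every loyal general assembles a vector of what it was told (plus its own value).
vectors = {g: [strengths[g] if s == g else announced(s, g) for s in range(N)]
           for g in strengths}

def relayed(sender, receiver):
    """Step 3: vector that 'sender' passes on (the traitor may send garbage)."""
    return [999] * N if sender == TRAITOR else vectors[sender]

# Step 4: each loyal general takes an element-wise majority of the vectors it received.
for g in sorted(strengths):
    received = [relayed(s, g) for s in range(N) if s != g]
    decision = []
    for slot in range(N):
        value, count = Counter(v[slot] for v in received).most_common(1)[0]
        decision.append(value if count >= 2 else "UNKNOWN")
    print(f"general {g} decides {decision}")
# Every loyal general prints the same decision: [1, 2, 'UNKNOWN', 4]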

Reaching Agreement
- If message delivery is unbounded, no agreement can be reached even if only one process fails
  - A slow process is indistinguishable from a faulty one
- BAR fault tolerance
  - Until now: nodes are either Byzantine or collaborative
  - New model: Byzantine, Altruistic and Rational
  - Rational nodes: report timeouts etc.

Reliable One-One Communication
- Issues were discussed in Lecture 3
  - Use reliable transport protocols (TCP) or handle at the application layer
- RPC semantics in the presence of failures
- Possibilities:
  - Client unable to locate server
  - Lost request messages
  - Server crashes after receiving a request
  - Lost reply messages
  - Client crashes after sending a request

Reliable One-Many Communication
- Reliable multicast
  - Lost messages => need to retransmit
- Possibilities:
  - ACK-based schemes: the sender can become a bottleneck
  - NACK-based schemes (see the sketch below)

Atomic Multicast
- Atomic multicast: a guarantee that all processes receive the message or none at all
  - Replicated database example
- Problem: how to handle process crashes?
- Solution: group view
  - Each message is uniquely associated with a group of processes
    - View of the process group when the message was sent
  - All processes in the group should have the same view (and agree on it)
  => Virtually synchronous multicast
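A sketch of the NACK-based idea mentioned above: receivers stay silent while messages arrive in order and only report gaps in the sequence numbers, so the sender is not flooded with acknowledgements. The message format and the retransmission path are illustrative assumptions.

class NackReceiver:
    """Receiver side of a NACK-based reliable multicast (simplified sketch)."""

    def __init__(self, send_nack):
        self.expected = 0            # next sequence number we expect to deliver
        self.buffer = {}             # out-of-order messages, keyed by sequence number
        self.delivered = []
        self.send_nack = send_nack   # callback: ask the sender to retransmit a message

    def receive(self, seq, payload):
        if seq < self.expected:
            return                   # duplicate: already delivered
        self.buffer[seq] = payload
        # A gap means an earlier message was lost: request retransmission.
        for missing in range(self.expected, seq):
            if missing not in self.buffer:
                self.send_nack(missing)
        # Deliver any prefix that is now complete, in order.
        while self.expected in self.buffer:
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1

# Example: message 1 is lost, so receiving message 2 triggers a NACK for 1.
r = NackReceiver(send_nack=lambda seq: print("NACK", seq))
r.receive(0, "a"); r.receive(2, "c")     # prints: NACK 1
r.receive(1, "b")                        # retransmission fills the gap
print(r.delivered)                       # ['a', 'b', 'c']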

Implementing Virtual Synchrony in Isis
a) Process 4 notices that process 7 has crashed and sends a view change.
b) Process 6 sends out all its unstable messages, followed by a flush message.
c) Process 6 installs the new view when it has received a flush message from everyone else.

Distributed Commit
- Atomic multicast is an example of a more general problem
  - All processes in a group perform an operation or none at all
- Examples:
  - Reliable multicast: operation = delivery of a message
  - Distributed transaction: operation = commit transaction
- Problem of distributed commit
  - All-or-nothing operations in a group of processes
- Possible approaches
  - Two-phase commit (2PC) [Gray 1978]
  - Three-phase commit

Two Phase Commit
- A coordinator process coordinates the operation
- Involves two phases
  - Voting phase: processes vote on whether to commit
  - Decision phase: actually commit or abort

Implementing Two-Phase Commit

actions by coordinator:

    write START_2PC to local log;
    multicast VOTE_REQUEST to all participants;
    while not all votes have been collected {
        wait for any incoming vote;
        if timeout {
            write GLOBAL_ABORT to local log;
            multicast GLOBAL_ABORT to all participants;
            exit;
        }
        record vote;
    }
    if all participants sent VOTE_COMMIT and coordinator votes COMMIT {
        write GLOBAL_COMMIT to local log;
        multicast GLOBAL_COMMIT to all participants;
    } else {
        write GLOBAL_ABORT to local log;
        multicast GLOBAL_ABORT to all participants;
    }

Outline of the steps taken by the coordinator in a two-phase commit protocol.

Implementing 2PC

actions by participant:

    write INIT to local log;
    wait for VOTE_REQUEST from coordinator;
    if timeout {
        write VOTE_ABORT to local log;
        exit;
    }
    if participant votes COMMIT {
        write VOTE_COMMIT to local log;
        send VOTE_COMMIT to coordinator;
        wait for DECISION from coordinator;
        if timeout {
            multicast DECISION_REQUEST to other participants;
            wait until DECISION is received;   /* remain blocked */
            write DECISION to local log;
        }
        if DECISION == GLOBAL_COMMIT
            write GLOBAL_COMMIT to local log;
        else if DECISION == GLOBAL_ABORT
            write GLOBAL_ABORT to local log;
    } else {
        write VOTE_ABORT to local log;
        send VOTE_ABORT to coordinator;
    }

actions for handling decision requests:   /* executed by separate thread */

    while true {
        wait until any incoming DECISION_REQUEST is received;   /* remain blocked */
        read most recently recorded STATE from the local log;
        if STATE == GLOBAL_COMMIT
            send GLOBAL_COMMIT to requesting participant;
        else if STATE == INIT or STATE == GLOBAL_ABORT
            send GLOBAL_ABORT to requesting participant;
        else
            skip;   /* participant remains blocked */
    }

Three-Phase Commit
- Two-phase commit: problem if the coordinator crashes (processes block)
- Three-phase commit: a variant of 2PC that avoids blocking
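The non-blocking property of three-phase commit comes from inserting a PREPARE_COMMIT round between voting and the final decision, so that no participant can commit while others are still allowed to abort. The following is only a hedged sketch of the coordinator side under assumed helper functions (multicast, collect_replies, write_log); participant recovery and coordinator election are omitted.

# Minimal 3PC coordinator sketch (helpers and message names are assumptions).
def three_phase_commit(participants, multicast, collect_replies, write_log):
    # Phase 1 (voting): ask every participant whether it can commit.
    multicast(participants, "VOTE_REQUEST")
    votes = collect_replies(participants, timeout=5)
    if len(votes) < len(participants) or any(v != "VOTE_COMMIT" for v in votes.values()):
        write_log("GLOBAL_ABORT"); multicast(participants, "GLOBAL_ABORT")
        return "ABORT"

    # Phase 2 (pre-commit): all votes were yes, so announce the tentative decision.
    # A participant that has seen PREPARE_COMMIT knows everyone voted to commit,
    # so if the coordinator now crashes, the survivors can elect a new coordinator
    # and finish the commit instead of blocking -- the point of the extra phase.
    write_log("PREPARE_COMMIT"); multicast(participants, "PREPARE_COMMIT")
    collect_replies(participants, timeout=5)      # acknowledgements

    # Phase 3 (decision): commit; a participant that missed this message learns
    # the outcome from the other participants or during recovery.
    write_log("GLOBAL_COMMIT"); multicast(participants, "GLOBAL_COMMIT")
    return "COMMIT"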

Recovery
- Techniques thus far allow failure handling
- Recovery: operations that must be performed after a failure to return to a correct state
- Techniques:
  - Checkpointing: periodically checkpoint state
  - Upon a crash, roll back to a previous checkpoint with a consistent state

Independent Checkpointing
- Each process periodically checkpoints independently of other processes
- Upon a failure, work backwards to locate a consistent cut
- Problem: if the most recent checkpoints form an inconsistent cut, we must keep rolling back until a consistent cut is found
  - Cascading rollbacks can lead to a domino effect

Coordinated Checkpointing
- Take a distributed snapshot [discussed in Lecture 11]
- Upon a failure, roll back to the latest snapshot
  - All processes restart from the latest snapshot

Message Logging
- Checkpointing is expensive
  - All processes restart from the previous consistent cut
  - Taking a snapshot is expensive
  - Infrequent snapshots => all computation after the previous snapshot will need to be redone [wasteful]
- Combine checkpointing (expensive) with message logging (cheap)
  - Take infrequent checkpoints
  - Log all messages between checkpoints to local stable storage
  - To recover: simply replay messages from the previous checkpoint
    - Avoids recomputation from the previous checkpoint
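A toy sketch of the checkpoint-plus-message-log idea for a single process: state is checkpointed occasionally, every incoming message is appended to stable storage before it is processed, and recovery reloads the last checkpoint and deterministically replays the logged messages instead of recomputing everything. The file names and the toy message handler are illustrative assumptions.

import json, os

CHECKPOINT = "checkpoint.json"    # assumed stable storage for the state snapshot
MSG_LOG = "messages.log"          # assumed stable storage for the message log

def handle(state, msg):
    """Deterministic message handler (toy example: a running sum)."""
    state["total"] += msg["value"]
    return state

def deliver(state, msg):
    with open(MSG_LOG, "a") as f:            # log the message before processing it
        f.write(json.dumps(msg) + "\n")
    return handle(state, msg)

def checkpoint(state):
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)
    open(MSG_LOG, "w").close()               # messages before the checkpoint are no longer needed

def recover():
    state = json.load(open(CHECKPOINT)) if os.path.exists(CHECKPOINT) else {"total": 0}
    if os.path.exists(MSG_LOG):              # replay messages received since the checkpoint
        for line in open(MSG_LOG):
            state = handle(state, json.loads(line))
    return state

state = deliver({"total": 0}, {"value": 5})
checkpoint(state)                            # infrequent and expensive
state = deliver(state, {"value": 7})         # cheap log append
print(recover())                             # {'total': 12}, recovered after a crash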