System Malfunctions: Implementing Atomicity and Durability (Crash, Abort, and Media Failures)

System Malfunctions: Implementing Atomicity and Durability (Chapter 22)
Transaction processing systems have to maintain correctness in spite of malfunctions:
- Crash
- Abort
- Media failure

Failures: Crash
- Caused by a processor failure or a software bug
- The program behaves unpredictably, destroying the contents of main (volatile) memory
- The contents of mass store (non-volatile memory) are generally unaffected
- Active transactions are interrupted, leaving the database in an inconsistent state
- The server supports atomicity by providing a recovery procedure that restores the database to a consistent state
- Since rolling forward is generally not feasible, recovery rolls the active transactions back

Failures: Abort
- Causes:
  - User (e.g., a cancel button)
  - Transaction (e.g., a deferred constraint check)
  - System (e.g., deadlock, lack of resources)
- The technique used by the recovery procedure supports atomicity: roll the transaction back

Failures: Media
- Durability requires that the database state produced by committed transactions be preserved
- The possibility of a mass store failure implies that the database state must be stored redundantly (in some form) on independent non-volatile devices

Log
- A sequence of records (a sequential file), modified only by appending (no in-place updating)
- Contains information from which the database can be reconstructed
- Read by the routines that handle abort and crash recovery
- The log and the database are stored on different mass storage devices; the log is often replicated to survive media failure
- Contains valuable historical data not in the database: how did the database reach its current state?
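The append-only discipline is what the rest of the chapter builds on, so here is a minimal sketch of it; the class and field names are illustrative assumptions, not the book's code.

```python
from dataclasses import dataclass, field
from typing import Any, List, Optional

@dataclass
class LogRecord:
    kind: str                  # 'begin', 'update', 'commit', 'abort', ...
    tid: Optional[int] = None  # transaction id
    item: Optional[str] = None # identity of the data item (for update records)
    before: Any = None         # before image (undo information)

@dataclass
class Log:
    records: List[LogRecord] = field(default_factory=list)

    def append(self, rec: LogRecord) -> int:
        """Append-only: add a record at the tail; never update in place."""
        self.records.append(rec)
        return len(self.records) - 1   # position of the record in the log

# Usage: T1 begins, updates x (whose old value was 17), and commits.
log = Log()
log.append(LogRecord('begin', tid=1))
log.append(LogRecord('update', tid=1, item='x', before=17))
log.append(LogRecord('commit', tid=1))
```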

Log
- Each modification of the database causes an update record to be appended to the log
- An update record contains:
  - The identity of the data item modified
  - The identity of the transaction (tid) that made the modification
  - A before image (undo record): a copy of the data item before the update occurred
- This is referred to as physical logging

[Figure: a log of update records, one per modified item (x, y, z, u, y, w, z), each tagged with the tid that made the change (T1, T1, T2, T3, T1, T4, T2) and the item's before image; the most recent database update is at the tail of the log.]

Transaction Abort Using Log
- Scan the log backwards, using the tid to identify the transaction's update records
- Reverse each update using its before image
- Reversal is done in last-in-first-out order
- In a strict system, new values are unavailable to concurrent transactions (as a result of long-term exclusive locks); hence rollback makes the transaction atomic
- Problem: terminating the scan (the log can be long)
- Solution: append a begin record for each transaction, containing its tid, prior to its first update record

[Figure: the same log preceded by a begin record (B) for T1; to abort T1, the scan goes back to T1's begin record, using T1's update records (U) to reverse its changes along the way.]

Logging Savepoints
- A savepoint record is inserted in the log when a savepoint is created; it contains the tid and the savepoint identity
- Rollback procedure:
  - Scan the log backwards, using the tid to identify update records
  - Undo the updates using the before images
  - Terminate the scan when the appropriate savepoint record is encountered

Crash Recovery Using Log
- Abort all transactions that were active at the time of the crash
- Problem: how do you identify them?
- Solution: an abort record or a commit record is appended to the log when a transaction terminates
- Recovery procedure: scan the log backwards; if the first record of T encountered is an update record, T was active at the time of the crash, so roll it back
- A transaction is not committed until its commit record is in the log
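As a concrete illustration of the abort procedure described above, here is a small sketch; the record layout (dicts with kind, tid, item, before) is an assumption made for illustration, not the text's data structures.

```python
# Sketch of transaction abort using the log: scan backwards, undo the
# aborting transaction's updates in LIFO order using the before images,
# and stop at its begin record.

def abort(log, db, tid):
    """log: list of record dicts in append order; db: dict item -> value."""
    for rec in reversed(log):
        if rec.get('tid') != tid:
            continue
        if rec['kind'] == 'update':
            db[rec['item']] = rec['before']   # restore the before image
        elif rec['kind'] == 'begin':
            break                             # terminate the backward scan
    log.append({'kind': 'abort', 'tid': tid}) # record that tid has aborted

# Example: roll T1 back after two updates.
db = {'x': 42, 'y': 'ab'}
log = [
    {'kind': 'begin',  'tid': 1},
    {'kind': 'update', 'tid': 1, 'item': 'x', 'before': 17},
    {'kind': 'update', 'tid': 1, 'item': 'y', 'before': 'a'},
]
abort(log, db, tid=1)
assert db == {'x': 17, 'y': 'a'}
```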

Crash Recovery Using Log

[Figure: a log containing begin (B), update (U), commit (C), and abort (A) records for T1 through T4 up to the crash point; because T1's abort record and T3's commit record appear in the log, T1 and T3 were not active at the time of the crash.]

- Problem: the backward scan must retrace the entire log
- Solution: periodically append a checkpoint record to the log, containing the tids of all transactions active at the time of the append
  - The backward scan goes at least as far as the last checkpoint record appended
  - The transactions active at the time of the crash are determined from the log suffix that includes the last checkpoint record
  - The scan continues until those transactions have been rolled back

Example

[Figure: a backward scan over the log B2 B3 U2 B1 C2 B5 U3 U5 A5 CK U1 U4 B6 C4 U6 U1, followed by the crash. The checkpoint record CK lists T1, T4, and T3 as active; combining it with the log suffix shows that T1, T3, and T6 were active at the time of the crash.]

Write-Ahead Log
- When x is updated, two writes must occur: the update of x in the database and the append of an update record to the log. Which goes first?
  - Update x, then append to the log: a crash between the two leaves no before image in the log, so the update cannot be undone
  - Append to the log, then update x: a crash between the two is harmless; recovery applies the before image, which has no effect
- Hence the update record must reach the log first: the write-ahead rule

Write-Ahead Log: Performance
- Problem: two I/O operations for each database update
- Solution: a log buffer in main memory
  - An extension of the log on mass store, periodically flushed to mass store
  - The flush cost is pro-rated over multiple log appends
  - This effectively reduces the cost to one I/O operation per database update

Performance
- Problem: one I/O operation for each database update
- Solution: a database page cache in main memory
  - The page is the unit of transfer: the page containing a requested item is brought into the cache, and a copy of the item is transferred to the application
  - The page is retained in the cache for future use
  - The cache is checked for a requested item before doing I/O, so the I/O can often be avoided
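To make the checkpoint-bounded backward scan concrete, here is a sketch of how the transactions active at the crash might be identified; the dict-based record format and its field names are assumptions for illustration.

```python
# Sketch: scan the log backwards and stop at the last checkpoint record;
# the active transactions are those seen in the suffix or listed in the
# checkpoint, minus those whose commit or abort record appears in the suffix.

def active_at_crash(log):
    """log: list of record dicts; returns the set of tids to roll back."""
    terminated, seen = set(), set()
    for rec in reversed(log):
        kind, tid = rec['kind'], rec.get('tid')
        if kind in ('commit', 'abort'):
            terminated.add(tid)
        elif kind in ('update', 'begin') and tid not in terminated:
            seen.add(tid)
        elif kind == 'checkpoint':
            return (seen | set(rec['active'])) - terminated
    return seen  # no checkpoint found: the whole log was scanned

# Example matching the slide: CK lists T1, T4, T3; T4 commits after CK;
# T6 begins after CK; T1, T3, and T6 are active at the crash.
log = [
    {'kind': 'checkpoint', 'active': [1, 4, 3]},
    {'kind': 'update', 'tid': 1},
    {'kind': 'update', 'tid': 4},
    {'kind': 'begin',  'tid': 6},
    {'kind': 'commit', 'tid': 4},
    {'kind': 'update', 'tid': 6},
    {'kind': 'update', 'tid': 1},
]
assert active_at_crash(log) == {1, 3, 6}
```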

Page and Log Buffering

[Figure: mass store holds the database and the log; main memory holds the database cache and the log buffer.]

Cache Management
- Cache pages that have been updated are marked dirty; the others are clean
- The cache ultimately fills:
  - Clean pages can simply be overwritten
  - Dirty pages must be written to the database before the page frame can be reused

Atomicity, Durability and Buffering
- Problem: the page and log buffers are volatile
  - Their use affects the time at which data becomes non-volatile
  - This complicates the algorithms for atomicity and durability
- Requirements:
  - The write-ahead feature (move update records to the log on mass store before the database is updated) is necessary to preserve atomicity
  - The new values written by a transaction must be on mass store when its commit record is written to the log (move new values to mass store before the commit record) to preserve durability
  - A transaction is not committed until its commit record is in the log on mass store
- Solution: requires new mechanisms

New Mechanism 1
- Forced vs. unforced writes
- On a database page:
  - An unforced write updates the cache page, marks it dirty, and returns control immediately
  - A forced write updates the cache page, marks it dirty, uses it to update the database page on disk, and returns control when the I/O completes
- On the log:
  - An unforced append adds the record to the log buffer and returns control immediately
  - A forced append adds the record to the log buffer, writes the buffer to the log, and returns control when the I/O completes

New Mechanism 2
- Log Sequence Number (LSN): log records are numbered sequentially
- Each database page contains the LSN of the update record describing the most recent update of any item in the page

[Figure: log records numbered 8 through 13; record 9 updates x and record 12 updates y, so the database page holding x and y carries LSN 12.]

Preserving Atomicity: the Write-Ahead Property and Buffering
- Problem 1: when the cache page replacement algorithm decides to write a dirty page, p, to mass store, an update record corresponding to p might still be in the log buffer
- Solution: force the log buffer if the LSN stored in p is greater than or equal to the LSN of the oldest record in the log buffer; then write p. This preserves the write-ahead policy
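A sketch of the Problem 1 check follows, under the assumption that the log buffer tracks the LSN range it currently holds; the class and function names are illustrative, not a real system's API.

```python
# Sketch: before a dirty page is written back to mass store, force the log
# buffer if it could still contain an update record for that page, i.e. if
# the page's LSN is >= the LSN of the oldest record still in the buffer.

class LogBuffer:
    def __init__(self):
        self.next_lsn = 0      # LSN of the next record to be appended
        self.buffered = []     # (lsn, record) pairs not yet on mass store
        self.on_disk = []      # records already forced to the log

    def append(self, record):
        lsn = self.next_lsn
        self.buffered.append((lsn, record))
        self.next_lsn += 1
        return lsn

    def force(self):
        self.on_disk.extend(self.buffered)
        self.buffered.clear()

def write_dirty_page(page, log_buffer, disk):
    """Evict a dirty page while preserving the write-ahead property."""
    if log_buffer.buffered:
        oldest_buffered_lsn = log_buffer.buffered[0][0]
        if page['lsn'] >= oldest_buffered_lsn:
            log_buffer.force()     # an update record might still be buffered
    disk[page['id']] = dict(page)  # now it is safe to write the page

# Usage: the page was last updated by a record that is still buffered,
# so evicting it forces the log buffer before the page reaches the disk.
lb, disk = LogBuffer(), {}
for i in range(6):
    lb.append({'kind': 'update', 'item': 'x'})
page = {'id': 'P1', 'lsn': 5, 'data': {'x': 42}}
write_dirty_page(page, lb, disk)
assert lb.buffered == [] and 'P1' in disk
```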

Preserving Durability I
- Problem 2: pages updated by T might still be in the cache when T's commit record is appended to the log buffer. Once the commit record is in the log buffer, it may be flushed to the log at any time, causing a violation of durability
- Solution: force the (dirty) pages in the cache that have been updated by T before appending T's commit record to the log buffer (the force policy)

Force Policy for Commit Processing
1. Force any update records of T in the log buffer, then
2. Force any dirty pages updated by T in the cache, then
3. Append T's commit record to the log buffer, then
   - Force the log buffer for an immediate commit, or
   - Write the log buffer when a group of transactions have committed (group commit)
- Steps (1) and (2) ensure atomicity (the write-ahead policy); steps (2) and (3) ensure durability

[Figure: the steps illustrated for a single update of x: T's update record (with before image x_old) is forced from the log buffer to the log, the cache page holding x_new is forced to the database, and T's commit record is then appended and forced.]

Force Policy
- Advantage: a transaction's updates are in the database (on mass store) when it commits
- Disadvantages:
  - Commit must wait until the dirty cache pages are forced
  - Pages containing items that are updated by many transactions (hotspots) have to be forced with the commit of each such transaction, even though an LRU page replacement algorithm would not write such a page out

Preserving Durability II
- Problem 2 (again): pages updated by T might still be in the cache when T's commit record is appended to the log buffer
- Solution: the update record contains an after image (called a redo record) as well as a before image
  - The write-ahead property still requires that the update record be written to mass store before the page
  - But it is no longer necessary to force dirty pages when the commit record is written to the log on mass store, since all after images precede the commit record in the log
  - This is referred to as a no-force policy

No-Force Commit Processing
- Append T's commit record to the log buffer; force the buffer for an immediate commit
  - T's update records precede its commit record in the buffer, ensuring the updates are durable before (or at the same time as) it commits
- T's dirty pages can be flushed from the cache at any time after the update records have been written (necessary for the write-ahead policy)
- T's dirty pages can be written before or after the commit record
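Here is a minimal sketch of the force-policy commit steps listed above, assuming a simple in-memory model of the log buffer, cache, and mass store; all names are illustrative. Under the no-force policy just described, step (2) would simply be dropped.

```python
# Sketch of force-policy commit processing: force T's update records, then
# T's dirty pages, then append and force T's commit record.

def force_log(log_buffer, log_disk):
    """Move everything in the volatile log buffer to the log on mass store."""
    log_disk.extend(log_buffer)
    log_buffer.clear()

def commit_force_policy(tid, log_buffer, log_disk, cache, db):
    # (1) Force any of T's update records still in the log buffer.
    force_log(log_buffer, log_disk)
    # (2) Force the dirty cache pages that T updated; write-ahead holds
    #     because step (1) already put the update records on mass store.
    for page in cache:
        if page['dirty'] and tid in page['updaters']:
            db[page['id']] = dict(page['items'])
            page['dirty'] = False
            page['updaters'].discard(tid)
    # (3) Append T's commit record, then force the log buffer for an
    #     immediate commit (group commit would delay this force).
    log_buffer.append({'kind': 'commit', 'tid': tid})
    force_log(log_buffer, log_disk)

# Usage: one dirty page updated by T1 is forced before T1's commit record.
log_buffer = [{'kind': 'update', 'tid': 1, 'item': 'x'}]
log_disk, db = [], {}
cache = [{'id': 'P1', 'dirty': True, 'updaters': {1}, 'items': {'x': 42}}]
commit_force_policy(1, log_buffer, log_disk, cache, db)
assert db == {'P1': {'x': 42}} and log_disk[-1]['kind'] == 'commit'
```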

No-Force Policy for Commit Processing

[Figure: as in the force-policy figure, but the update record now carries both the before image x_old and the after image x_new, and the cache page holding x_new need not be forced before the commit record is written.]

No-Force Policy
- Advantages:
  - Commit does not wait until dirty pages are forced
  - Pages with hotspots do not have to be written out
- Disadvantages:
  - Crash recovery is complicated: some updates of committed transactions (contained in redo records) might not be in the database on restart after a crash
  - Update records are larger

Recovery With No-Force Policy
- Problem: when a crash occurs there might exist pages in the database (on mass store) that
  - contain updates of uncommitted transactions: they must be rolled back
  - do not (but should) contain the updates of committed transactions: they must be rolled forward
- Solution: use a sharp checkpoint

[Figure: at the crash the log holds U(T1: x_old -> x_new), U(T2: y_old -> y_new), C(T1). T1 committed and T2 was active; page p2 (holding y_new) was flushed but page p1 (still holding x_old) was not. Hence p1 must be rolled forward using x_new, and p2 must be rolled back using y_old.]

Sharp Checkpoint
- Problem: how far back must the log be scanned in order to find the update records of committed transactions that must be rolled forward?
- Solution: before appending a checkpoint record, CK, to the log buffer, halt processing and force all dirty pages from the cache
  - The recovery process can then assume that all updates in records prior to CK were written to the database (only updates in records after CK might not be in the database)

Recovery with Sharp Checkpoint
- Pass 1: the log is scanned backward to the most recent checkpoint record, CK, to identify the transactions active at the time of the crash
- Pass 2: the log is scanned forward from CK to the most recent record; the after images in all update records are used to roll the database forward
- Pass 3: the log is scanned backward to the begin record of the oldest transaction active at the time of the crash; the before images in the update records of these transactions are used to roll these transactions back
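The three passes can be sketched directly. This is an illustrative simplification (dict-based records, whole-item images, no buffering), not the book's algorithm; the small example reproduces the figure above, where T1's update is rolled forward and T2's is rolled back.

```python
# Sketch of three-pass recovery with a sharp checkpoint, assuming physical
# logging with both before and after images.

def recover(log, db):
    # Pass 1: backward to the most recent checkpoint, collecting the
    # transactions that were active at the time of the crash.
    ck_index, terminated, active = 0, set(), set()
    for i in range(len(log) - 1, -1, -1):
        rec = log[i]
        if rec['kind'] in ('commit', 'abort'):
            terminated.add(rec['tid'])
        elif rec['kind'] in ('update', 'begin') and rec['tid'] not in terminated:
            active.add(rec['tid'])
        elif rec['kind'] == 'checkpoint':
            ck_index = i
            active |= set(rec['active']) - terminated
            break

    # Pass 2: forward from the checkpoint, rolling the database forward with
    # the after images (idempotent for physical logging).
    for rec in log[ck_index:]:
        if rec['kind'] == 'update':
            db[rec['item']] = rec['after']

    # Pass 3: backward again, rolling back the transactions active at the
    # crash using their before images (a real system would stop at the begin
    # record of the oldest such transaction).
    for rec in reversed(log):
        if rec['kind'] == 'update' and rec['tid'] in active:
            db[rec['item']] = rec['before']
    return active

# Example from the figure: T1 committed (x rolled forward), T2 active (y rolled back).
db = {'x': 'x_old', 'y': 'y_new'}   # p1 not flushed, p2 flushed before the crash
log = [
    {'kind': 'checkpoint', 'active': []},
    {'kind': 'begin', 'tid': 1},
    {'kind': 'update', 'tid': 1, 'item': 'x', 'before': 'x_old', 'after': 'x_new'},
    {'kind': 'begin', 'tid': 2},
    {'kind': 'update', 'tid': 2, 'item': 'y', 'before': 'y_old', 'after': 'y_new'},
    {'kind': 'commit', 'tid': 1},
]
recover(log, db)
assert db == {'x': 'x_new', 'y': 'y_old'}
```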

Recovery with Sharp Checkpoint
- Issue 1: database pages containing items updated after CK was appended to the log might have been flushed before the crash
  - No problem with physical logging: rolling forward using after images in pass 2 is idempotent
  - The rollforward is unnecessary in this case, but not harmful
- Issue 2: some update records after CK might belong to an aborted transaction, T1. These updates will not be rolled back in pass 3, since T1 was not active at the time of the crash
  - Treat the rollback operations performed while aborting T1 as ordinary updates and append compensating log records to the log

[Figure: the log after CK contains U1 (x_old -> x_new), the compensating record CL1 (x_new -> x_old), and the abort record A1, followed by the crash.]

- Issue 3: what if the system crashes during recovery?
  - Recovery is simply restarted
  - If physical logging is used, the pass 2 and pass 3 operations are idempotent and hence can be redone

Fuzzy Checkpoints
- Problem: the system cannot be stopped to take a sharp checkpoint (write all dirty pages)
- Use a fuzzy checkpoint instead:
  - Before writing CK, record the identity of all dirty pages in volatile memory (do not flush them)
  - All recorded pages must be flushed before the next checkpoint record is appended to the log buffer

[Figure: a log containing U1, CK1, U2, CK2, then the crash. The page corresponding to U1 is recorded at CK1 and will have been flushed by CK2; the page corresponding to U2 is recorded at CK2 but might not have been flushed at the time of the crash. Pass 2 must therefore start at CK1.]

Archiving the Log
- Problem: what do you do when the log fills mass store?
- The initial portion of the log is not generally discarded, since it contains important data:
  - A record of how the database got to its current state
  - Information for analyzing performance
- Solution: archive the initial portion of the log on tertiary storage; only the portion of the log containing records of active transactions needs to be maintained on secondary store
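A sketch of aborting a transaction during normal operation while logging compensating records, as Issue 2 requires; the structures are illustrative, and the compensating record is simply an ordinary update record whose before and after images are swapped.

```python
# Sketch: each undo performed while aborting a transaction is itself logged
# as an update (a compensating record), followed by an abort record, so that
# pass 2 of a later recovery redoes the compensation as well.

def abort_with_clrs(log, db, tid):
    for rec in reversed(list(log)):       # snapshot: we append while scanning
        if rec['kind'] == 'update' and rec['tid'] == tid:
            db[rec['item']] = rec['before']
            # compensating record: the inverse update, logged like any other
            log.append({'kind': 'update', 'tid': tid, 'item': rec['item'],
                        'before': rec['after'], 'after': rec['before'],
                        'compensating': True})
        elif rec['kind'] == 'begin' and rec['tid'] == tid:
            break
    log.append({'kind': 'abort', 'tid': tid})

# Example matching the figure: the log ends with CL1 (x_new -> x_old) and A1.
db = {'x': 'x_new'}
log = [{'kind': 'begin', 'tid': 1},
       {'kind': 'update', 'tid': 1, 'item': 'x',
        'before': 'x_old', 'after': 'x_new'}]
abort_with_clrs(log, db, tid=1)
assert db == {'x': 'x_old'} and log[-1]['kind'] == 'abort'
```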

Logical Logging
- Problem: with physical logging, simple database updates can result in multiple update records with large before and after images
  - Example: "insert t in T" might cause reorganization of a data page and of an index page for each index; the before and after images might be entire pages
- Solution: log the operation and its inverse instead of before and after images
  - Example: store "insert t in T" / "delete t from T" in the update record

Logical Logging
- Problem 1: logical operations might not be idempotent (e.g., x := x + 5)
  - The pass 2 rollforward does not work as is: it makes a difference whether or not the page on mass store was updated before the crash
- Solution: do not apply the operation in update record i to a database item in page P during pass 2 if P.LSN >= i

Logical Logging
- Problem 2: operations are not atomic
  - A crash during the execution of a non-atomic operation can leave the database in a physically inconsistent state
  - Example: "insert t in T" requires an update to both a data page and an index page; a crash might occur after t has been inserted in T but before the index has been updated
  - Applying a logical redo operation in pass 2 to a physically inconsistent state is not likely to work; for example, there might be two copies of t in T after pass 2

Physiological Logging
- Solution: use physical-to-a-page, logical-within-a-page logging (physiological logging)
  - A logical operation involving multiple pages is broken into multiple logical mini-operations
  - Each mini-operation is confined to a single page and hence is atomic
  - Example: "insert t in T" becomes "insert t in a page of T" and "insert a pointer to t in a page of the index"
  - Each mini-operation gets a separate log record
  - Since mini-operations are not idempotent, use the LSN check before applying an operation in pass 2

Deferred-Update System
- Update: append the new value to an intentions list (in volatile memory); append an update record (containing only the after image) to the log buffer; the write-ahead property does not apply, since there is no before image
- Abort: discard the intentions list
- Commit: force the commit record to the log; initiate the database update using the intentions list
- Completion of intentions-list processing: write a completion record to the log

Recovery in a Deferred-Update System
- The checkpoint record contains the list of committed (not active) but incomplete transactions
- Recovery:
  - Scan back to the most recent checkpoint record to determine the transactions that were committed but whose updates were incomplete at the time of the crash
  - Scan forward to install the after images for those incomplete transactions
  - No third pass is required, since transactions that were active (not committed) at the time of the crash have not affected the database
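To illustrate the LSN check that protects non-idempotent redo, here is a sketch; the page and record layout is assumed for illustration, and the operation is the slide's x := x + 5.

```python
# Sketch: during pass 2, a logged operation with LSN i is applied to page P
# only if P.LSN < i, i.e. only if the page does not already reflect it.

def redo_pass(log, pages, apply_op):
    """log: list of (lsn, page_id, op); pages: page_id -> {'lsn', 'data'}."""
    for lsn, page_id, op in log:
        page = pages[page_id]
        if page['lsn'] >= lsn:
            continue                  # the page already reflects this operation
        page['data'] = apply_op(page['data'], op)
        page['lsn'] = lsn             # record that the operation is now applied

# Example: the non-idempotent operation "add 5" is redone exactly once.
pages = {'P': {'lsn': 7, 'data': 10}}   # the update with LSN 7 reached disk
log = [(7, 'P', 5), (9, 'P', 5)]        # op = amount to add
redo_pass(log, pages, lambda data, op: data + op)
assert pages['P'] == {'lsn': 9, 'data': 15}
```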

Media Failure
- Durability requires that the database be stored redundantly on distinct mass storage devices:
  1. A redundant copy on a (mirrored) disk gives high availability; the log is still needed to achieve atomicity after an abort or a crash
  2. Redundant data in the log
- Problem: using the log (as in 2 above) to reconstruct the database is impractical, since it requires a scan starting at the first record
- Solution: use the log together with a periodic dump

Simple Dump
- The system stops accepting new transactions
- Wait until all active transactions complete
- Dump: copy the entire database to a file on mass storage
- Restart the log and the system

Restoring the Database From a Simple Dump
- Install the most recent dump file
- Scan backward through the log to determine the transactions that committed since the dump was taken (ignore aborted transactions and those that were active when the media failed)
- Scan forward through the log, installing the after images of the committed transactions

Fuzzy Dump
- Problem: the system cannot be shut down to take a simple dump
- Solution: use a fuzzy dump
  - Write a begin dump record to the log
  - Copy database records to the dump file while the system is active, even copying records of active transactions and records that are locked
- The dump file might therefore:
  - reflect the incomplete execution of an active transaction that later commits (timeline: w_T(x), dump(x), dump(y), w_T(y), commit T)
  - reflect updates of an active transaction that later aborts (timeline: w_T(x), dump(x), abort T)

Naïve Restoration Using a Fuzzy Dump
- Install the dump on disk
- Scan the log backwards to the begin dump record to produce a list, L, of all transactions that committed since the start of the dump
- Scan the log forward and install the after images in the update records of all transactions in L
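Restoration from a simple dump can be sketched directly; the structures (a dict for the dump, dicts for log records) are illustrative assumptions.

```python
# Sketch: install the dump, find the transactions that committed after the
# dump was taken, then roll the database forward with their after images.

def restore_from_simple_dump(dump, log_since_dump):
    db = dict(dump)                               # 1. install the dump
    committed = {rec['tid'] for rec in log_since_dump
                 if rec['kind'] == 'commit'}      # 2. committed since the dump
    for rec in log_since_dump:                    # 3. forward pass
        if rec['kind'] == 'update' and rec['tid'] in committed:
            db[rec['item']] = rec['after']        # install the after image
    return db

# Example: T1 committed after the dump; T2 was active when the media failed.
dump = {'x': 1, 'y': 2}
log_since_dump = [
    {'kind': 'update', 'tid': 1, 'item': 'x', 'after': 10},
    {'kind': 'update', 'tid': 2, 'item': 'y', 'after': 20},
    {'kind': 'commit', 'tid': 1},
]
assert restore_from_simple_dump(dump, log_since_dump) == {'x': 10, 'y': 2}
```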

Naïve Restoration Using a Fuzzy Dump
- It does some things correctly: if T writes x and y and commits while the dump is in progress (timeline: w_T(x), w_T(y), commit T overlapping start dump, dump(x, y), end dump), then T is in L and is correctly rolled forward
- Problem: the naïve algorithm does not handle two cases:
  - T commits before the dump starts, but its dirty pages might not have been flushed until the dump completed: the dump does not read T's updates, and T is not in L, so T's committed updates are missing from the restored database
  - The dump reads T's updates, but T later aborts (timeline: begin T, w_T(x), start dump, dump(x), abort T, end dump): T is not in L and is not rolled forward, yet its aborted updates remain in the dump

Taking a Fuzzy Dump
- Solution: use fuzzy checkpointing and compensating log records
- Dump algorithm:
  - Write a checkpoint record
  - Write a begin dump record (BD)
  - Dump
  - Write an end dump record (ED)

Restoration Using a Fuzzy Dump
- Install the dump on the mass storage device
- Scan backward to CK3 (the last checkpoint before the media failure) to produce a list, L, of all transactions active at the time of the media failure
- Scan forward from CK1; use the redo records to roll the database forward to its state at the time of the media failure
  - CK1 is the fuzzy checkpoint preceding CK2, the checkpoint written at the start of the dump; all pages dirty in the cache at the time of CK1 have been written to the database by CK2, so redoing from CK1 covers every update the dump might have missed
- Scan backward to the begin record of the oldest transaction in L and roll all the transactions in L back

[Figure timeline: CK1, CK2, BD, ED, CK3, media failure.]
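Finally, a sketch of the fuzzy-dump restoration passes, assuming update records carry both before and after images and that checkpoint records list the then-active transactions; the indices ck1_index and ck3_index are illustrative stand-ins for locating CK1 and CK3 in the log.

```python
# Sketch: install the dump, identify the transactions active at the media
# failure (backward to CK3), redo from CK1, then undo the active ones.

def restore_from_fuzzy_dump(dump, log, ck1_index, ck3_index):
    db = dict(dump)                                    # install the dump

    # Backward from the end to CK3: transactions active at the media failure.
    terminated, active = set(), set()
    for rec in reversed(log[ck3_index:]):
        if rec['kind'] in ('commit', 'abort'):
            terminated.add(rec['tid'])
        elif rec['kind'] in ('update', 'begin') and rec['tid'] not in terminated:
            active.add(rec['tid'])
    active |= set(log[ck3_index]['active']) - terminated

    # Forward from CK1: the redo records bring the database to its state at
    # the time of the media failure (every page dirty at CK1 was flushed by
    # CK2, which precedes the begin-dump record).
    for rec in log[ck1_index:]:
        if rec['kind'] == 'update':
            db[rec['item']] = rec['after']

    # Backward: roll back the transactions in `active` using before images
    # (a real system would stop at the begin record of the oldest one).
    for rec in reversed(log):
        if rec['kind'] == 'update' and rec['tid'] in active:
            db[rec['item']] = rec['before']
    return db
```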