Cost Reduction of Replicated Data in Distributed Database System


Divya Bhaskar, Meenu
Department of Computer Science and Engineering, Madan Mohan Malaviya University of Technology, Gorakhpur 273010, India
divyabh70@gmail.com, Myself_meenu@yahoo.co.in

Abstract — In this paper we propose a new replica control algorithm, NCP (node child protocol), for the management of replicated data in a distributed database system. The algorithm imposes a logical tree structure on the set of copies of an object and reduces the sizes of both the read and the write quorum. With NCP, a read operation is executed by reading a single copy in a failure-free environment, and it still reads only a single copy when failures occur. A write operation requires fewer data copies than in comparable protocols, so NCP provides a lower write cost than the other protocols considered.

Keywords — replication, message cost, quorum protocol

I. INTRODUCTION

A distributed database system consists of a set of computers (called sites) distributed over several locations. In a distributed system the processors are loosely coupled sites that share no physical components. Replication is a useful technique for distributed database systems in which reliability is important, such as banking or airline reservation systems: it is the process of maintaining multiple copies of the data at different sites. In a distributed system, data is replicated to achieve fault tolerance. One of the most important advantages of replication is that it masks and tolerates failures in the network gracefully: if a failure occurs, the system remains operational and available to its users. However, maintaining the replicas requires complex and expensive algorithms; these algorithms aim to reduce the cost of executing operations while preserving the availability of the replicated data. In a replicated database, copies of an object may be stored at several sites in the network.
The multiple copies appear as a single logical object to a transaction. This is termed one-copy equivalence and is enforced by the replica control protocol. The correctness criterion for a replicated database is one-copy serializability, which ensures both one-copy equivalence and serializable execution of transactions. Replication can increase the availability of data items in a distributed database system: if one site fails, a transaction may continue by accessing the data at another site, which is how fault tolerance is achieved.

A. Types of replication

A replication tool may be selected based on the type of replication it supports; capabilities and performance characteristics vary from one type to another. A replication strategy can be characterized by two basic questions: where and when. When data is updated at one site, the updates have to be propagated to the respective replicas. When updates are propagated distinguishes synchronous (eager) from asynchronous (lazy) methods, and where updates may take place distinguishes update-everywhere from primary-copy (master-slave) methods.

Synchronous replication works on the principle of the two-phase commit protocol: when an update to the master database is requested, the master system connects to all other systems (the slave databases), locks those databases at the record level, and then updates them simultaneously. If one of the slaves is not available, the data may not be updated. The consistency of the data is preserved; however, all sites must be available at the time the updates are propagated. Asynchronous (store-and-forward) replication comes in two variations: periodic and aperiodic.
In periodic replication, updates to data items are propagated at specific intervals; in aperiodic replication, updates are propagated only when necessary, usually on the firing of an event in a trigger. The length of time for which the copies may be inconsistent is an adjustable, application-dependent parameter.

In the update-anywhere method, update propagation can be initiated by any site: all sites are allowed to update their copy of the datum. In the primary-copy method, there is only one copy (the primary, or master) that can be updated, and all other (secondary, or slave) copies are updated to reflect the changes to the master.

Various forms of replication strategy are as follows:

Snapshot replication: a snapshot, or copy, of the data is taken from one server and moved to another server, or to another database on the same server. After the initial synchronization, snapshot replication can refresh the data in published tables periodically. Although snapshot replication is the easiest form of replication, it requires copying all data items each time a table is refreshed.

Transactional replication: the replication agent monitors the server for changes to the database and transmits those changes to the backup servers. This transmission can take place immediately or on a periodic basis. Transactional replication is used in server-to-server scenarios.

Divya Bhaskar and Meenu, ijesird, Vol. I (III), September 2014 / 124

Merge replication: merge replication allows the replicas to work independently; both entities can work offline. When they are connected, the merge replication agent checks for changes on both sets of data and modifies each database accordingly. If a transaction conflict occurs, it uses a predefined conflict resolution algorithm to restore consistency. Merge replication is used mostly in wireless environments.

Statement-based replication: statement-based replication intercepts every SQL query and sends it to the different replicas; each replica (server) operates independently. To resolve conflicts, read-write queries are sent to all servers, whereas read-only queries can be sent to a single server, which allows the read workload to be distributed. Statement-based replication suits optimistic approaches in which each cache maintains the same replica.

II. RELATED WORK

A. Read one write all (ROWA)

The simplest protocol for managing replicated data in a distributed database system is read one write all (ROWA): a read operation is allowed to read any one copy, while a write operation must write all copies of the data item. ROWA provides read operations with a high degree of availability at very low cost, since a read accesses only a single copy. On the other hand, it severely restricts the availability of write operations, which cannot be executed after the failure of any copy.
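The ROWA rule above can be sketched in a few lines. This is a minimal, illustrative model (the `copies` dictionary, `up` set, and function names are mine, not part of the protocol's literature): a read contacts any one reachable copy, while a write must reach every copy and is blocked by a single failure.

```python
# Minimal sketch of read-one/write-all (ROWA) over N copies.
# 'copies' maps site id -> stored value; 'up' is the set of
# currently reachable sites. Both are illustrative stand-ins.

def rowa_read(copies, up):
    """A read contacts any single available copy: cost 1."""
    for site, value in copies.items():
        if site in up:
            return value
    raise RuntimeError("read failed: no copy available")

def rowa_write(copies, up, value):
    """A write must update all N copies; one failed site blocks it."""
    if set(copies) - up:
        raise RuntimeError("write blocked: some copy is unreachable")
    for site in copies:
        copies[site] = value

copies = {site: 0 for site in range(5)}          # N = 5 replicas
rowa_write(copies, up=set(range(5)), value=42)   # cost N = 5 messages
print(rowa_read(copies, up={3}))                 # 42: reads survive failures
```

The asymmetry is visible directly: the read loop returns on the first reachable site, while the write refuses to proceed unless `up` covers every copy.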
MESSAGE COST ANALYSIS: In ROWA, a read operation reads only one copy and a write operation must write all copies of the data item. Thus, if there are N copies in the system, C_ROWA-R = 1 and C_ROWA-W = N.

B. Voting protocol

In the voting protocol, every replicated file is assigned some number of votes. Each read operation collects a read quorum of r votes, and each write operation collects a write quorum of w votes, such that r + w is greater than the total number of votes assigned to the file. This ensures a non-empty intersection between every read quorum and every write quorum. Appropriate values of r and w are chosen to control the reliability and performance characteristics of the replicated file.

MESSAGE COST ANALYSIS: In voting, if the majority consensus method is used, the quorum sizes for read and write operations are the same, namely a majority of the total number of votes assigned to the copies. Thus C_VT-R and C_VT-W are both ⌈(N + 1)/2⌉, where N is the total number of votes assigned to the copies.

C. Tree quorum protocol

The tree quorum (TQ) protocol is a replica control protocol that reduces the cost of executing read and write operations without the need for reconfiguration. This is achieved by imposing a logical tree structure on the set of copies of each object. The protocol operates by reading one copy of an object while guaranteeing fault tolerance for write operations, and it does not require any reconfiguration on account of a failure and subsequent recovery. It provides a degree of data availability comparable to other replica control protocols at substantially lower cost. Furthermore, the approach is fault-tolerant and exhibits graceful degradation: in a failure-free environment the communication costs are minimal; as failures occur, the cost of replica control may increase; and when failures are repaired, the protocol reverts to its original mode without undergoing any reconfiguration.
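The tree quorum read rule — use the root if it is accessible, otherwise recursively substitute read quorums of a majority of its subtrees — can be sketched as follows. The `(id, children)` tree encoding, the `up` set, and the function name are illustrative assumptions, not the original paper's code.

```python
# Sketch of tree-quorum read-quorum construction (after Agrawal and
# El Abbadi): use the root if accessible, otherwise recursively
# replace it by read quorums of a majority of its subtrees.

def tq_read_quorum(node, up):
    """Return a read quorum (set of node ids), or None if none exists."""
    nid, children = node
    if nid in up:
        return {nid}                      # best case: cost 1 at the root
    if not children:
        return None
    majority = (len(children) + 1) // 2   # M = majority of the D children
    quorum, got = set(), 0
    for child in children:
        sub = tq_read_quorum(child, up)
        if sub is not None:
            quorum |= sub
            got += 1
            if got == majority:
                return quorum
    return None                           # too few subtrees reachable

# Example: a 13-node tree of degree 3 (root, 3 children, 9 grandchildren).
tree = (1, [(2, [(5, []), (6, []), (7, [])]),
            (3, [(8, []), (9, []), (10, [])]),
            (4, [(11, []), (12, []), (13, [])])])

all_up = set(range(1, 14))
print(tq_read_quorum(tree, all_up))        # {1}: a single copy suffices
print(tq_read_quorum(tree, all_up - {1}))  # a majority of the children
```

With all sites up the quorum is just the root; removing the root from `up` forces a majority (here two) of its children into the quorum, which is exactly the graceful cost growth described above.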
MESSAGE COST ANALYSIS: To estimate operation costs in TQ, let h be the height of the tree, D the degree of a node, and M the majority of D, i.e. M = ⌈(D + 1)/2⌉. The read quorum is of size 1 if the root is accessible; if the root fails, the read quorum becomes a majority of its children, and so on recursively, so the read cost ranges over 1 ≤ C_TQ-R ≤ M^h. All write quorums have the same size: the root is selected, and a majority of the children of each selected node is included recursively, so the write cost is

C_TQ-W = (M^(h+1) − 1) / (M − 1).

III. PROPOSED WORK

In this section we present a new protocol for the management of replicated data items in a distributed database system. The protocol imposes a logical tree structure, with a well-defined root, on the set of data copies; the example tree used here contains 13 nodes, with degree 3 and three levels. In this approach, quorums are constructed by selecting a node and a single one of its children at each level of the logical tree.

A. Construction of the read quorum

To form a read quorum, the recursive function ReadQuorum is called with the root of the tree. The read quorum is formed by selecting the root node if it is accessible; if the root is not accessible, any single child of the root is selected, and if that child is also not accessible, any single child of the selected child is selected. No reconfiguration is required in case of a failure and subsequent recovery, and the protocol provides a comparable degree of data availability. Fig. 1 shows a tree of degree 3 with three levels, having 13 nodes.

B. Construction of the write quorum

To form a write quorum, the recursive function WriteQuorum is called with the root of the tree. The write quorum is constructed by selecting the root, any single child of the root, and any single child of the selected child, the same process continuing down to the height of the tree.

Figure 2. Algorithms for constructing read and write quorums

FUNCTION ReadQuorum(Tree : TREE) : QUORUM;
VAR Quorum : QUORUM;
BEGIN
    IF Empty(Tree) THEN RETURN({});
    IF Tree^.Root is read-accessible THEN
        RETURN({Tree^.Root});
    (* Root inaccessible: descend into a single subtree *)
    FOR each SubTree of Tree DO
        Quorum := ReadQuorum(SubTree);
        IF Quorum <> {} THEN RETURN(Quorum);
    END;
    RETURN({});   (* no read quorum can be formed *)
END ReadQuorum;

FUNCTION WriteQuorum(Tree : TREE) : QUORUM;
VAR Quorum : QUORUM;
BEGIN
    IF Empty(Tree) THEN RETURN({});
    IF Tree^.Root is NOT write-accessible THEN
        RETURN({});   (* the root must be in every write quorum *)
    IF Tree has no subtrees THEN
        RETURN({Tree^.Root});
    (* Include the root together with one child chain *)
    FOR each SubTree of Tree DO
        Quorum := WriteQuorum(SubTree);
        IF Quorum <> {} THEN RETURN({Tree^.Root} ∪ Quorum);
    END;
    RETURN({});   (* no accessible child chain *)
END WriteQuorum;

C. An example of read and write quorums

For the tree in Fig. 1, a read quorum is formed by selecting the root in the best case. If the root node is not accessible, any one child of the root is selected, and if that child is also not accessible, any one child of the selected child is taken; the possible read quorums may therefore be {1, 2, 5}, {1, 3, 8}, {1, 3, 10}, and so on.
A write quorum is formed by taking the root, any single child of the root, and any single child of the selected child, so some possible write quorums are {1, 4, 12}, {1, 2, 6}, and {1, 2, 5}. We can see that there is always a non-empty intersection between a read quorum and a write quorum of the given tree.

IV. PERFORMANCE ANALYSIS

In this section we estimate the message cost and the availability of read and write operations in the proposed node child protocol (NCP) and compare them with the read-one/write-all (ROWA), voting (VOTE), and tree quorum (TQ) protocols. The protocol aims to improve the read and write costs relative to the existing protocols, so the analysis focuses mainly on cost; an availability analysis is included to show that the availability of read and write operations does not decrease under our protocol.
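As a back-of-the-envelope check on the comparison, the message costs of the four protocols can be tabulated for the example configuration (13 copies, degree 3, three levels). The ROWA and voting figures come from the cost analyses in Section II; the TQ and NCP figures follow the quorum constructions described there. This is a sketch under those assumptions, not the authors' exact evaluation.

```python
# Back-of-the-envelope message costs on the example configuration:
# N = 13 copies arranged as a tree of degree D = 3 with three levels
# (root, children, grandchildren).

N, D, levels = 13, 3, 3
M = (D + 1) // 2                 # majority of a node's D children (M = 2)

costs = {
    "ROWA": (1, N),                        # read one copy, write all N
    "VOTE": ((N + 1) // 2, (N + 1) // 2),  # majority read and write quorums
    # TQ: best-case read is the root; a write takes the root plus a
    # majority of children at every level (geometric sum over levels).
    "TQ":   (1, (M ** levels - 1) // (M - 1)),
    # NCP: best-case read is the root; a write takes the root plus a
    # single child per level, i.e. one copy per level of the tree.
    "NCP":  (1, levels),
}

for name, (read_cost, write_cost) in costs.items():
    print(f"{name:5s} read = {read_cost:2d}  write = {write_cost:2d}")
```

For this configuration the script gives write costs of 13 (ROWA), 7 (VOTE), 7 (TQ), and 3 (NCP), matching the claim that NCP has the lowest best-case write cost of the four.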

Fig. 1 Comparison of read availability
Fig. 2 Comparison of write availability
Fig. 3 Comparison of read cost
Fig. 4 Comparison of write cost
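As a concrete illustration of the NCP quorum rules described in Section III, the following sketch builds NCP read and write quorums over the 13-node example tree. The `children_of` encoding and the function names are illustrative assumptions, not the authors' implementation.

```python
# NCP quorum construction on the 13-node example tree (degree 3,
# three levels). 'children_of' and the 'up' set are illustrative.

children_of = {1: [2, 3, 4], 2: [5, 6, 7], 3: [8, 9, 10], 4: [11, 12, 13]}

def ncp_read_quorum(up, root=1):
    """Read the first accessible copy: the root, else a child,
    else a grandchild. The quorum size stays 1 under failures."""
    level = [root]
    while level:
        for node in level:
            if node in up:
                return {node}
        level = [c for n in level for c in children_of.get(n, [])]
    return None                     # no copy is reachable

def ncp_write_quorum(up, root=1):
    """Write the root plus one accessible child at every level.
    The root is mandatory, so a root failure blocks writes."""
    if root not in up:
        return None
    quorum, node = {root}, root
    while children_of.get(node):
        node = next((c for c in children_of[node] if c in up), None)
        if node is None:
            return None             # no accessible child at this level
        quorum.add(node)
    return quorum

all_up = set(range(1, 14))
print(ncp_read_quorum(all_up))      # {1}: a single root copy
print(ncp_write_quorum(all_up))     # {1, 2, 5}: a root-to-leaf chain
```

The sketch also makes the write bottleneck visible: `ncp_write_quorum` returns `None` whenever the root is down, whereas reads fall through to lower levels at constant quorum size.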

V. CONCLUSION

In this paper we have proposed the node child protocol (NCP) for the management of replicated data in distributed systems. A logical tree structure is imposed on the copies to decrease operation cost. With NCP, a read operation can be carried out on a single root copy, and the read quorum size does not increase as the number of site failures grows. A write operation requires only the root and a single chain of its descendants, so the write cost of NCP is lower than that of the three protocols compared. In both NCP and TQ the root node must be included in every write quorum, so the root acts as a bottleneck for write operations. NCP needs no reconfiguration in case of site failure and subsequent recovery. The logical tree structure is particularly beneficial if it is organized so that the most reliable site is chosen as the root and the least reliable sites as the leaves; in that situation NCP performs very well in both failure-free and failure-prone environments.

REFERENCES
[1] D. Agrawal and A. El Abbadi, "The Tree Quorum Protocol: An Efficient Approach for Managing Replicated Data," Proc. VLDB, 1990, pp. 243-254.
[2] D. Agrawal and A. El Abbadi, "Efficient Techniques for Replicated Data Management," Proc. Workshop on Management of Replicated Data, 1990, pp. 48-52.
[3] D. K. Gifford, "Weighted Voting for Replicated Data," Proc. Symp. on Operating System Principles, 1979, pp. 150-162.
[4] R. H. Thomas, "A Majority Consensus Approach to Concurrency Control for Multiple Copy Databases," ACM Trans. on Database Systems, Vol. 4, No. 2, 1979, pp. 180-209.
[5] M. Chung and C. Cao, "Multiple Tree Quorum Algorithm for Replica Control in Distributed Database Systems," IEEE, 1992.
[6] P. A. Bernstein and N. Goodman, "An Algorithm for Concurrency Control and Recovery in Replicated Distributed Databases," ACM Trans. on Database Systems, Vol. 9, No. 4, December 1984.
[7] M. Ahamad and M. H. Ammar, "Performance Characterization of Quorum-Consensus Algorithms for Replicated Data," IEEE, 1989.
[8] D. Agrawal and A. El Abbadi, "An Efficient Solution to the Distributed Mutual Exclusion Problem," ACM Symp. on Principles of Distributed Computing, pp. 193-200, August 1989.
[9] P. A. Bernstein and N. Goodman, "A Proof Technique for Concurrency Control and Recovery Algorithms for Replicated Databases," Distributed Computing, Springer-Verlag, 2(1):32-44, January 1987.