
Ensuring Data Consistency in Large Network Systems

Weimin Du
Cisco Systems
250 West Tasman Drive, San Jose, CA, USA

Abstract

Data management in network systems differs from that in traditional data systems. It is important to ensure data consistency not only between network management systems, but also between network elements, which may not support database protocols such as two-phase commit, as well as between network management systems and network elements. The problem is nontrivial, especially when data are updated unilaterally from network elements without notifying network management systems, as network management systems and network elements operate independently and autonomously. This paper presents a technique that maintains data consistency in large network systems. The technique consists of a suite of protocols that allows creation, deletion, and modification, as well as downloading of data to network elements and verification of data consistency between network management systems and network elements. Together, these protocols ensure consistency when data are not updated directly from network elements, and otherwise detect and resolve inconsistencies as long as updated data have not been used by network elements for configuration.

1 Introduction

A data and telecommunication network is a geographically distributed collection of interconnected sub-networks for transporting data between stations. A network consists of a number of network elements (NEs) such as ATM switches. A NE's behavior is defined by its attributes (including its status), which are modeled as managed objects and accessible via network management protocols such as SNMP [1]. A network management system (NMS) is a software system that manages those NEs, e.g., configuring them so that they can transport data, provisioning services to customers as defined in service level agreements, and monitoring the network to detect and fix problems that may affect customer services.
A NMS performs the above tasks by accessing and manipulating NE attributes. From a management perspective, a network can be viewed as a collection of data, which can be categorized as follows (1):

User data: Defined and used by NMSs only. Examples include user connection descriptors, which are text descriptions of end-to-end user connections. NEs are not aware of user data.

Network data: Also known as managed objects. Examples include NE attributes. They are defined by individual NEs and used by both NEs and NMSs.

Hybrid data: Defined by NMSs, but used by NEs. Examples include NE configuration templates (e.g., Service Class Templates for ATM switches (2)). Hybrid data are always created as user data and become hybrid data after being copied (or downloaded) to NEs. Hybrid data are normally shared by multiple NEs and stored in both NEs and NMSs. We further distinguish between two kinds of hybrid data: hybrid user data, if they have not yet been used by NEs, and hybrid network data, if they have already been used by at least one NE. For example, a service class template is a user data when first created. It becomes a hybrid user data after it has been downloaded to ATM switches. It remains a hybrid user data until a switch has actually used it for configuration, at which point it becomes a hybrid network data. Note that hybrid data are not network data, and no network traps will be generated when they are added, modified, or deleted. The use of hybrid data for NE configuration, however, will cause network data to be modified, and network traps will then be generated.

(1) There is yet another category of data that are defined and used by a network element only. They are not shared between NEs and are invisible to network management systems. They raise no consistency issue and are thus not considered in this paper.

(2) A Service Class Template is a set of data structures that map ATM service types (e.g., CBR, ABR) to sets of pre-configured QoS parameters. It allows users to easily configure ATM switches for given QoS requirements, reusing pre-defined templates instead of specifying a large number of configuration parameters.

In large network systems (e.g., commercial telecommunication networks), network management is distributed. A NMS manages a number of networks, and a network is managed by multiple NMSs. The following data inconsistencies are thus possible:

User data inconsistency: The same user data may have different values at different NMSs. Existing database technologies (e.g., the two-phase commit protocol) can be used to prevent this kind of inconsistency.

Network data inconsistency: The network data in NMSs may not be in sync with those in NEs. Existing network management technologies can be used to prevent this kind of inconsistency. When network data are modified, network traps will be generated (by the network elements). NMSs will receive the traps and update their copies accordingly. Most existing NMSs (e.g., HP OpenView) can keep data in NMSs in sync with the actual data in NEs.

Hybrid data inconsistency: Hybrid data inconsistency can be introduced when multiple NMSs manipulate the same hybrid data at the same time, for example, when one system is downloading the data to a NE while another is modifying it. Hybrid data inconsistency can also be introduced when users update hybrid data directly from NEs. Since no network traps are generated for hybrid data, NMSs are not aware of the change, resulting in inconsistency between NMSs and NEs. Note that this is not the recommended way of updating hybrid data, but there is nothing preventing users from doing it. It is thus the NMS's responsibility to prevent or resolve this type of inconsistency.

Hybrid data consistency is important in network systems.
Since hybrid data are used for NE configuration, inconsistencies may cause NEs to be configured differently, even though they have used the same configuration data. The configuration differences may result in overall network performance degradation, or even service impact. They also give network operators a wrong view of the actual network configuration. Ensuring hybrid data consistency, however, is not trivial. Database technology cannot be used to prevent this type of inconsistency even if hybrid data are updated only at a single NMS. The change will be consistently propagated to all NMSs, but not to the NEs, as they may not implement the two-phase commit protocol. The problem is even more challenging when hybrid data are updated directly from NEs. Since hybrid data are not network data, no network traps will be generated for the change, and NMSs may not be aware of it. This paper presents a technique that maintains hybrid data consistency in large network systems. The technique consists of a suite of protocols that allows creation, deletion, modification, and downloading of hybrid data, as well as verification of data consistency between NMSs and NEs. Together, these protocols ensure hybrid data consistency when data are not updated directly from NEs, and otherwise detect and resolve inconsistencies as long as updated hybrid data have not been used by NEs for configuration. The rest of the paper is organized as follows. First, we discuss the network management framework with a focus on data consistency. We then present the basic protocols, followed by an enhancement that copes with direct updates of hybrid data from NEs. We also discuss a few related issues and conclude the paper with remarks.

2 Data Management

Data management in network systems is by nature decentralized. A network can be managed by one or more NMSs, and a NMS can manage one or more networks. Thus, data are distributed at NEs and NMSs.
More specifically, user data can be stored in multiple NMSs. Network data are stored at a single NE, but can be replicated at multiple NMSs, while hybrid data can be stored at multiple NEs and multiple NMSs. In NMSs, data are normally stored in a database, while in NEs, data can be stored in a database, flat files, or some other form, and can be accessed via network management protocols such as SNMP (Simple Network Management Protocol). Data can be manipulated in the following ways: add, modify, delete, download, unload, associate, and upload:

Add: Create a new data, e.g., a service class template.

Modify: Modify the contents of an existing data.

Delete: Delete an existing data from both NMSs and NEs.

Upload: Copy existing data from NEs to NMSs.

Download: Copy existing data from NMSs to NEs, e.g., via FTP.

Unload: Remove existing data from NEs.

Associate: Configure NEs using existing data (e.g., Service Class Templates). Note that there is no separate disassociate operation; configuring a NE with new data automatically disassociates it from the previous data.

These operations may cause a data to change its category. The first download operation of a user data implies the transition to hybrid user data. Subsequent download operations (of the same data), however, will not change the data category. The last unload operation of a hybrid user data implies the transition back to user data. The first association of a hybrid user data implies the transition to hybrid network data. Subsequent association operations will not change the data category. The last disassociation of a hybrid network data implies the transition back to hybrid user data. For example, a service class template, when first created through NMSs, is a user data, which becomes a hybrid user data after being downloaded to an ATM switch. It becomes a hybrid network data once it has been used to configure the ATM switch. Upload does not change the category of a data. Figure 1 below shows the transition diagram of hybrid data.

[Figure 1. Hybrid data transition diagram (transitions: Add, Delete, Download, Last unload, Associate, Last disassociate, Update)]

Not all operations are applicable to all data. For example, hybrid network data cannot be modified, as this implies changing NE configuration and impacting existing services. Table 1 below defines the applicability of the above operations to user, hybrid user, hybrid network, and network data (Y: applicable, X: not applicable).

Table 1. Data and operations

             User   Hybrid User   Hybrid Network   Network
  Add         Y          X              X              X
  Modify      Y          Y              X              Y
  Delete      Y          Y              X              X
  Download    Y          Y              Y              X
  Unload      X          Y              X              X
  Associate   X          Y              Y              X
  Upload      X          Y              Y              Y

User data can only be manipulated through NMSs. Network data are normally modeled in some hierarchical structure, such as a MIB (Management Information Base), reflecting their containment relationships. Network data are attributes and status of network elements and are thus pre-defined. They cannot be added or deleted from NMSs, but they can be modified from NEs via a CLI (command line interface) or from NMSs through network management protocols such as SNMP. When network data are modified, a trap will be generated by the NE and forwarded to the NMSs. Hybrid network data cannot be added, modified, deleted, or unloaded. Adding a hybrid network data would mean that NEs use their private templates for configuration; since there is no coordination between NEs, inconsistencies between NEs would be inevitable. (NEs are allowed to do independent configuration using network data, which will not be used for other NEs' configuration.) Modifying, deleting, and unloading a hybrid network data imply changing existing NE configurations, which may not be possible without affecting existing services. Note that association (and thus disassociation) of hybrid network data is allowed from both NMSs and NEs. Data manipulation is also decentralized. A user data can be manipulated by multiple NMSs, while a network data and a hybrid network data can be manipulated by multiple NMSs and NEs. The basic principle of data management in network systems is that the network is the master: NMSs always try to reflect the changes made at NEs and keep their data in sync at all times with those at NEs. Figure 2 below illustrates this basic data management architecture.
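The category transitions described above can be sketched as a small state machine. This is an illustrative sketch only: the class and method names below are not from the paper, and the two counters anticipate the download-count and associate-count introduced later in Section 3.2.1.

```python
from enum import Enum

class Category(Enum):
    USER = "user"
    HYBRID_USER = "hybrid user"
    HYBRID_NETWORK = "hybrid network"

class HybridData:
    """Tracks the category of one data item as operations are applied."""
    def __init__(self):
        self.category = Category.USER      # Add creates plain user data
        self.download_count = 0            # number of NEs holding a copy
        self.associate_count = 0           # number of NEs configured with it

    def download(self):
        # First download: user -> hybrid user; later downloads change nothing.
        self.download_count += 1
        if self.category is Category.USER:
            self.category = Category.HYBRID_USER

    def unload(self):
        # Last unload: hybrid user -> user.
        self.download_count -= 1
        if self.download_count == 0 and self.category is Category.HYBRID_USER:
            self.category = Category.USER

    def associate(self):
        # First association: hybrid user -> hybrid network.
        self.associate_count += 1
        if self.category is Category.HYBRID_USER:
            self.category = Category.HYBRID_NETWORK

    def disassociate(self):
        # Last disassociation: hybrid network -> hybrid user.
        self.associate_count -= 1
        if self.associate_count == 0 and self.category is Category.HYBRID_NETWORK:
            self.category = Category.HYBRID_USER
```

Subsequent downloads or associations leave the category unchanged, matching the "first/last operation" rules in the text.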

Figure 2. Data management in network systems

3 Preventing Hybrid Data Inconsistencies

This section describes the basic protocols that ensure hybrid data consistency when data are manipulated from NMSs. We first outline the assumptions we make in order for the protocols to work properly. These are all reasonable assumptions that should be followed whenever possible; some of them will be relaxed in the next section for a more robust solution. We then describe the possible inconsistency scenarios, followed by detailed descriptions of the protocols.

3.1 Assumptions

The main assumption is that hybrid user data can only be added, modified, and deleted through NMSs. Since hybrid user data are not network data, no network traps will be generated for these operations, making it impossible for NMSs to be aware of such changes immediately. Adding and modifying hybrid user data directly from NEs (via CLI) creates different versions of the data (which are then used for configuration) at different NEs, resulting in inconsistencies not only between NMSs and NEs, but also between different NEs. Unloading hybrid user data from NEs is also undesirable, as it makes it difficult for NMSs to keep track of data status, which is essential for NMSs to validate data operations (as described in Table 1). This assumption will be relaxed to some extent in the next section. The proposed approach is an extension of the traditional primary copy approach and follows the read-one-write-all data access protocol [2]. Thus, the following implementation assumptions are made without further elaboration. When multiple NMSs manage the same network, one system is identified as primary and all the others as secondary. It is assumed that all NMSs implement an election protocol that ensures one and only one primary exists among all the NMSs at any given time. See, e.g., [5] and [6] for sample protocols.
Each NMS also implements a locking protocol, which makes use of the above election protocol and guarantees that at most one NMS can hold the lock on a user or hybrid data at any time. This implies that secondary NMSs may manipulate user and hybrid data only if permission is granted by the primary (via the locking protocol). The protocol ensures that user and hybrid user data are manipulated consistently among NMSs. It also ensures that hybrid data are transformed consistently. Note that the above two assumptions simply reflect implementation decisions and can be replaced with other similar ones.

3.2 Protocols

Data inconsistencies may be introduced when the following operations are performed:

Adding/modifying/deleting user data: The operation involves a number of NMSs and may not be successful at all of them. The two-phase commit protocol can be used to prevent such inconsistency; no new protocol is needed.

Downloading/unloading/associating hybrid network data: The operation involves one NE and a number of NMSs. Successful execution performs the requested operation at the NE and updates the data status at all NMSs correspondingly. The basic two-phase commit protocol could be used, except that it is normally not implemented at NEs. Thus, a new protocol is needed at the NMSs.

Modifying/deleting hybrid user data: The operation requires modifying/deleting the data at all NEs where it has been downloaded and updating the data status at all NMSs. It thus may involve a number of NEs and a number of NMSs. Again, a new protocol is needed.

Two protocols have been proposed to prevent the above data inconsistencies: one for modifying and deleting existing hybrid user data, and the other for downloading, unloading, and associating hybrid data to, from, and with NEs, respectively.
They are simple extensions of the traditional two-phase commit protocol.

3.2.1 Data Structures

Each protocol must perform a constraint check as its first step to make sure that the operation is valid for the data, as defined in Table 1. Constraint checking requires data status information, and it is each protocol's responsibility to maintain correct data category information. For this purpose, three simple data structures are introduced for each user and hybrid data at each NMS.

An integer variable (state) whose value indicates the category (user, hybrid user, or hybrid network) of the data.

An integer variable (download-count) whose value gives the number of NEs the data has been downloaded to. For user data, this count is always zero.

An integer variable (associate-count) whose value gives the number of NEs the data has been associated with. For user and hybrid user data, this count is always zero.

The download-count of a hybrid data is incremented each time it is downloaded to a new NE and decremented each time it is unloaded from a NE. A hybrid user data becomes a user data when its download-count reaches zero. Similarly, the associate-count of a hybrid network data is incremented each time it is associated with a new NE and decremented each time it is disassociated from a NE. A hybrid network data becomes a hybrid user data when its associate-count reaches zero.

3.2.2 Download/Unload/Associate Existing Hybrid Data

The proposed protocol uses the two-phase commit protocol to ensure that all NMSs are updated consistently. To cope with the fact that the two-phase commit protocol is not implemented at NEs, the basic protocol is extended so that NMSs perform the network operation right after all NMSs are ready to commit but before they actually commit the update. This ensures that the data change at NMSs can be rolled back in case the network operation fails.

1. Obtain the lock; abort the transaction if the lock cannot be obtained.
2. Update the download-count or associate-count of the hybrid data, update the data status accordingly when the count changes to or from zero at all NMSs, and prepare to commit; abort the transaction if this is not successful at all NMSs.
3. Download/unload/associate the hybrid data at the NE and wait for the response; commit the transaction if the operation is successful, abort otherwise.
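A minimal sketch of the steps above, assuming simple in-memory stand-ins for the NMSs and the NE; the class and method names are illustrative, and locking (step 1) is omitted for brevity. The key point is that the network operation runs between prepare and commit, so an NE failure can still be rolled back at the NMSs.

```python
class NMS:
    """Stand-in for a management system's transactional store."""
    def __init__(self):
        self.counts, self.pending = {}, {}

    def prepare(self, data_id, delta):
        # Tentatively adjust the download-count; a real system would
        # also persist a prepare record here.
        self.pending[data_id] = self.counts.get(data_id, 0) + delta
        return True

    def commit(self, data_id):
        self.counts[data_id] = self.pending.pop(data_id)

    def abort(self, data_id):
        self.pending.pop(data_id, None)

class NE:
    """Stand-in network element: no 2PC support, one plain operation."""
    def __init__(self, ok=True):
        self.ok, self.data = ok, set()

    def download(self, data_id):
        if self.ok:
            self.data.add(data_id)
        return self.ok

def download(data_id, ne, nms_list):
    # Phase 1: every NMS prepares the count/status update.
    if not all(nms.prepare(data_id, +1) for nms in nms_list):
        for nms in nms_list:
            nms.abort(data_id)
        return False
    # Network operation between prepare and commit: a failure here
    # still lets the NMS-side change be rolled back.
    if ne.download(data_id):
        for nms in nms_list:
            nms.commit(data_id)
        return True
    for nms in nms_list:
        nms.abort(data_id)
    return False
```

If the NE rejects the operation, every NMS aborts and its counts are untouched; only a successful NE operation lets the NMSs commit.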
3.2.3 Modify/Delete Existing Hybrid User Data

Since NEs normally do not implement the two-phase commit protocol, it is generally impossible to always ensure consistent update/delete at more than one NE. The following protocol is an optimistic one, assuming successful update/delete at all NEs. If the operation fails at some NEs, the transaction is aborted and the NE operations already performed are rolled back. Data consistency is ensured if all rollbacks are successful; otherwise, the network operator is notified to manually resolve the inconsistency.

1. Obtain the lock; abort the transaction if this fails.
2. Update/delete the hybrid user data at all NMSs and prepare to commit; abort the transaction if this is not successful at all NMSs.
3. For each NE with the hybrid user data, update/delete the hybrid user data at the NE; if not successful at all NEs, roll back the changes and abort the transaction. If a rollback fails, notify the network operator.
4. Commit the transaction if not already aborted.

Note that although a rollback retry is not guaranteed to be successful, the chance of failure is small, as we have already successfully updated/deleted the data at the NE.

4 Detecting and Resolving Data Inconsistencies

Unfortunately, data inconsistency is inevitable in network systems, for the following two reasons. First, although it is undesirable to manipulate hybrid data directly from NEs, there is no way to prevent users from doing so. Since hybrid data are not network data and no network traps are generated for such manipulations, it is impossible for NMSs to always keep in sync with NEs. Second, the protocols described in the previous section may fail to ensure data consistency. Data inconsistency may be acceptable if it can be detected and resolved quickly (e.g., before the data are used for NE configuration). In this section, we describe a mechanism that detects and resolves possible data inconsistencies.
The mechanism keeps track of data inconsistency status by verifying hybrid data in NMSs against those in NEs, either periodically or upon user request. When an inconsistency is detected, a correction protocol is invoked to resolve it, with or without user intervention. We first introduce the concept of data consistency status. The protocols described in the previous section that implement hybrid data update/delete will be extended to include data consistency status. We then describe in detail the verifying and correction protocols, which, together with the other protocols, ensure overall data consistency.

4.1 Data Consistency Status

A hybrid data is in consistent status if:

It has the same value in all NMSs and at all NEs where it has been downloaded; and

It is confirmed by the NMS user (e.g., a network operator).

We say a data is confirmed with users if it either originated (i.e., was added or updated) from NMSs, or originated from NEs but has been approved by NMS users. The condition is important, as data can be added/updated directly from NEs. NMSs will eventually detect the change (via the verifying protocol) and upload the data to NMSs. But the NMS users must confirm that this data is valid and can be used by all other NEs. NE users (who updated the data) may not have the global view, and it is the NMS users' responsibility to ensure data consistency for all NMSs and NEs. For each hybrid data, two flags are introduced to indicate its consistency status.

RU (of type Boolean) indicates whether the data at NMSs is confirmed with the NMS user (e.g., a network operator). It is set to TRUE when the data is added or modified through NMSs. It is set to FALSE when the data is added or modified through a NE and has yet to be confirmed by users.

RN indicates whether the data at NMSs is in sync with that of the NEs. It is an array of Boolean flags, one for each NE, plus a summary flag. The flag for NE n, denoted RN[n], is set to TRUE if the data is in sync with that of n, and to FALSE otherwise. The overall flag, denoted RN, is defined as RN[n1] & RN[n2] & ... & RN[nm].

Given a hybrid data d, we say that d is consistent with users if d.ru=true; consistent with NEs if d.rn=true; and consistent (with both users and NEs) if d.ru=true and d.rn=true.

4.2 Extended Protocol for Modifying/Deleting Hybrid User Data

When a NMS has successfully updated/deleted a hybrid user data and propagated the change to all NEs, the RU and RN flags should both be set to TRUE. There are, however, cases where the protocol fails to propagate the change to all NEs.
The RN flags of these NEs should be set accordingly, so that the verifying protocol (see below) will be able to distinguish this case from cases where data are updated from a NE but not propagated to NMSs. Below is the extended version of the protocol that sets the RU and RN flags properly. For the given hybrid data d (to update/delete):

1. Obtain the lock; abort the transaction if this fails.
2. Update/delete the hybrid user data at all NMSs and prepare to commit; abort the transaction if this is not successful at all NMSs. Otherwise, set d.ru=true and d.rn[n]=false for all NEs n.
3. For each NE n with the hybrid user data, update/delete the hybrid user data at the NE. If not successful at all NEs, roll back the changes and abort the transaction. If the rollback fails at NE n, notify the network operator and set d.rn[n]=true. If the rollback is successful, restore both d.ru and d.rn[n] (to their original values).
4. Commit the transaction if not already aborted, and set d.rn[n]=true for all NEs n.

The RN flags are initially set to FALSE for all NEs, indicating that the change has not yet been propagated to any NE (second step). If everything goes well, the RN flags are set to TRUE for all NEs (last step). Otherwise, we set the RN flag to TRUE for those NEs where the change had been propagated but the rollback later failed when the transaction was aborted (third step).

4.3 Verifying Protocol

The purpose of the verifying protocol is to update the data consistency status by evaluating the RU and RN values of each hybrid data. These values are used later by the correction protocol to determine whether there is a data inconsistency and how it can be resolved. For each NE n, upload all hybrid data; for each uploaded data d:

If d exists in NMSs and has the same value: set d.ru=true and d.rn[n]=true.

If d exists in NMSs but has a different value: if d.ru=true and d.rn[n]=false, do nothing. Otherwise, set d.ru=false and d.rn[n]=true.

If d does not exist in NMSs: add the data (to NMSs) and set d.ru=false and d.rn[n]=true.
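The three cases above might be coded roughly as follows. The dictionary shapes for the uploaded data, the NMS copies, and the per-data ru/rn flags are assumptions for illustration, not the paper's actual data structures.

```python
def verify(ne_name, uploaded, nms_values, status):
    """One round of the verifying protocol for NE `ne_name`.
    uploaded   : {data_id: value} read back from the NE
    nms_values : {data_id: value} held at the NMSs
    status     : {data_id: {"ru": bool, "rn": {ne_name: bool}}}"""
    for data_id, value in uploaded.items():
        flags = status.setdefault(data_id, {"ru": False, "rn": {}})
        if data_id in nms_values and nms_values[data_id] == value:
            # Case 1: values agree -- no inconsistency.
            flags["ru"] = True
            flags["rn"][ne_name] = True
        elif data_id in nms_values:
            # Case 2: values differ. If the NMS already marked this NE
            # stale (ru=true, rn[n]=false), an NMS-side change simply
            # has not propagated yet, so do nothing. Otherwise the NE
            # was updated directly and the user's view is obsolete.
            if not (flags["ru"] and flags["rn"].get(ne_name) is False):
                flags["ru"] = False
                flags["rn"][ne_name] = True
        else:
            # Case 3: data added directly at the NE -- upload it to the
            # NMSs and leave it unconfirmed until the user approves.
            nms_values[data_id] = value
            flags["ru"] = False
            flags["rn"][ne_name] = True
```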
The first case is the ideal one, where no data inconsistency is detected. It indicates that data added or updated through a NMS have been successfully propagated to all NMSs and NEs. There are two possible causes of data inconsistency in the second case: data updated from a NMS have not been propagated to the NE, or data updated from the NE have not been propagated to NMSs. In the former case, the NMS will have set RU to TRUE and RN to FALSE (see Section 4.2), and there is no need to update the consistency status. Otherwise (the latter case), RU is set to FALSE, indicating that the user's view of the data is obsolete.

There is only one reason for the inconsistency in the last case: data added directly from NEs. Note that the delete protocol (Section 4.2) will never cause data to be deleted from NMSs but not from NEs: if it fails to delete the data from some NEs, the transaction is aborted, leaving the data undeleted at all NMSs. It is possible, however, that it fails to roll back the delete operation at some NEs. Thus, RU is set to FALSE, indicating that the data has yet to be confirmed by users.

4.4 Correction Protocol

There are four combinations of RU and RN values. If both RU and RN are TRUE, indicating no data inconsistency, there is nothing to be done. FALSE values of both RU and RN indicate that the data originated from neither NEs nor NMSs, a situation that should never happen, as NMSs never initiate data changes on their own; an error should be logged in this case. The main purpose of the correction protocol is to deal with the other two cases, where RU and RN have different values. A FALSE RN value indicates that data at NEs are obsolete. This happens when the modifying/deleting protocol failed to propagate the change to all NEs and also failed to roll back the transaction (see Section 4.2). The correction protocol automatically fixes the problem by downloading the data to the NEs. A FALSE RU value, set by the verifying protocol, indicates that data updated from a NE were not propagated to NMSs. The NMS user needs to approve the change before making it available to all other NEs. When the change is not approved, the user may or may not provide a new change; when a new change is provided, the correction protocol propagates it to all NMSs and NEs (via the modifying protocol). For each hybrid data d:

If d.ru=false and d.rn=true: validate the data with the user. If confirmed without change, set d.ru=true. If confirmed with change, invoke the modifying protocol (Section 4.2; d.ru and d.rn will have been set to TRUE if the modification was successful). If not confirmed, log an error (and leave d.ru and d.rn unchanged).

If d.ru=true and d.rn=false: for each NE n such that d.rn[n]=false, invoke the download protocol (Section 3.2.2) to download the data to n (d.rn[n] will have been set to TRUE if the download was successful).

5 Discussion

5.1 Related Work

The key issue of data management in network management systems is replication control. Much research has been done in this area. A popular solution is to use a replication protocol (such as read-one-write-all [2]) to ensure that each user has the same view, a concurrency control protocol (e.g., two-phase locking) that prevents users from seeing inconsistent views resulting from partial executions of other users, and a commit protocol (e.g., two-phase commit [2]) to ensure complete execution (i.e., all-or-nothing). Together, the three protocols ensure that each user sees the same consistent view. The main difference in network management systems is that data may be replicated at both NMSs and NEs, but only NMSs have database support such as two-phase commit. As a result, transactions may be partially executed (i.e., successful at some NEs but not at others). This paper extends the above-mentioned scheme to deal with this problem. More specifically, it extends the two-phase commit protocol of NMSs to ensure that transactions are committed at NMSs only when the changes have been successfully propagated to all NEs. Another major issue is that data may be unilaterally added/updated/deleted at one site. This problem is actually not unique to network management. For example, in heterogeneous (or federated) database systems, local database systems may unilaterally add/update/delete data residing at the site. There is research addressing this problem, e.g., [3] and [4]. The basic idea of these solutions is to allow such unilateral data manipulation by imposing limitations on data accessibility. This technique, however, does not work in network management systems.
NEs may add/update/delete data without informing other NEs and NMSs, due to the lack of a replication control protocol at NEs. A new verifying and correction protocol has been introduced for NMSs that detects and resolves inconsistencies resulting from such unilateral data manipulations.

5.2 Implementation Issues

The presented technique has been partially implemented in the Cisco WAN Manager (CWM) product to ensure non-network data consistency. CWM is a distributed network management system managing all types of Cisco WAN switches, including the BPX, IPX, and MGX series. CWM has been in production for years. Typical customers include telco companies and large enterprises, whose networks consist of hundreds of ATM and Frame Relay switches supporting up to a million user connections (PVCs or SPVCs). The network is normally managed by multiple CWMs, which provide services such as configuration, provisioning, monitoring, and statistics collection. CWM guarantees that the same consistent network view is presented at all CWM systems for both network and non-network data. The two-phase commit protocol has been used in the presented technique to ensure data consistency between CWM systems. This is theoretically sound, but presents difficulties in real implementations when the number of CWM systems is large (e.g., > 10), as they are normally geographically dispersed. Temporary unavailability of one CWM system may block all other systems in service provisioning, a situation not acceptable to most of our customers. We have thus implemented a variation of the presented approach, which runs the two-phase commit protocol only between the provisioning system and a designated primary system. An asynchronous broadcast message is used to inform all other CWM systems. The basic idea is to minimize the number of CWM systems involved in the two-phase commit protocol, and thus the possibility of system unavailability, while making sure that at least one CWM system (the primary) has complete and up-to-date information on all data. A non-primary CWM system can thus always sync up with the primary when needed (e.g., after a network failure). Currently, the protocol is only used to manage a small number of large hybrid data (e.g., service class templates). Performance is thus not a major concern, and the additional overhead of the verifying and correction protocols is acceptable. This may become an issue when managing a large number of small hybrid data; more efficient verifying and correction protocols may then be needed.
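The variation described above can be sketched roughly as follows, with hypothetical CWM stand-ins: two-phase commit runs only between the provisioning system and the primary, and the remaining systems receive a best-effort asynchronous notification they can miss and recover from later.

```python
class CWM:
    """Stand-in for one CWM system's data store."""
    def __init__(self, up=True):
        self.up, self.data, self.pending = up, {}, {}

    def prepare(self, data_id, value):
        self.pending[data_id] = value
        return True

    def commit(self, data_id):
        self.data[data_id] = self.pending.pop(data_id)

    def abort(self, data_id):
        self.pending.pop(data_id, None)

    def notify(self, data_id, value):
        if not self.up:
            raise ConnectionError("system unavailable")
        self.data[data_id] = value

def provision(data_id, value, provisioner, primary, others):
    """Commit through the primary only, then asynchronously inform the
    remaining CWM systems; a temporarily unavailable secondary no
    longer blocks provisioning."""
    # Two-phase commit restricted to two participants.
    if not (provisioner.prepare(data_id, value) and
            primary.prepare(data_id, value)):
        provisioner.abort(data_id)
        primary.abort(data_id)
        return False
    provisioner.commit(data_id)
    primary.commit(data_id)
    # Best-effort broadcast; an unreachable secondary is skipped and
    # re-syncs from the primary later (e.g., after a network failure).
    for system in others:
        try:
            system.notify(data_id, value)
        except ConnectionError:
            pass
    return True
```

Because the primary always commits, it holds the complete, up-to-date copy from which any straggler can later sync up.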
6 Conclusions

In this paper, we have presented a technique that maintains data consistency in large network systems. The technique consists of a suite of protocols that allows creation, deletion, and modification, as well as downloading of data to network elements and verification of data consistency between network management systems and network elements. Together, these protocols ensure consistency when data are not updated directly from network elements, and otherwise detect and resolve inconsistencies as long as updated data have not been used by network elements for configuration. The presented approach has the following features:

It allows management of all four categories of data.

It allows a network to be managed by multiple NMSs.

It allows a NMS to manage multiple networks.

It ensures network-network, network-system, and system-system consistency.

It detects and resolves possible inconsistencies between the network and the management systems.

The presented technique has been partially implemented in Cisco WAN Manager and is currently in production.

7 References

[1] D. Zeltserman, A Practical Guide to SNMPv3 and Network Management, Prentice Hall, 1999
[2] P. Bernstein, V. Hadzilacos, & N. Goodman, Concurrency Control and Recovery in Database Systems, Addison-Wesley Publishing Co., 1987
[3] W. Du, A. Elmagarmid, W. Kim, & O. Bukhres, Supporting Consistent Updates in Replicated Multidatabase Systems, International Journal on Very Large Data Bases, 2(2), 1993
[4] J. Jing, W. Du, A. Elmagarmid, & O. Bukhres, Maintaining Consistency of Replicated Data in Multidatabase Systems, Proc. of 14th International Conf. on Distributed Computing Systems, Poland, 1994
[5] H. Garcia-Molina, Elections in a Distributed Computing System, IEEE Transactions on Computers, 1982
[6] J. Kim & G. Belford, A Distributed Election Protocol for Unreliable Networks, Journal of Parallel and Distributed Computing


More information

7 Fault Tolerant Distributed Transactions Commit protocols

7 Fault Tolerant Distributed Transactions Commit protocols 7 Fault Tolerant Distributed Transactions Commit protocols 7.1 Subtransactions and distribution 7.2 Fault tolerance and commit processing 7.3 Requirements 7.4 One phase commit 7.5 Two phase commit x based

More information

Achieving Robustness in Distributed Database Systems

Achieving Robustness in Distributed Database Systems Achieving Robustness in Distributed Database Systems DEREK L. EAGER AND KENNETH C. SEVCIK University of Toronto The problem of concurrency control in distributed database systems in which site and communication

More information

Configuring Secure Shell

Configuring Secure Shell Configuring Secure Shell Last Updated: October 24, 2011 The Secure Shell (SSH) feature is an application and a protocol that provides a secure replacement to the Berkeley r-tools. The protocol secures

More information

Failure Models. Fault Tolerance. Failure Masking by Redundancy. Agreement in Faulty Systems

Failure Models. Fault Tolerance. Failure Masking by Redundancy. Agreement in Faulty Systems Fault Tolerance Fault cause of an error that might lead to failure; could be transient, intermittent, or permanent Fault tolerance a system can provide its services even in the presence of faults Requirements

More information