Distributed Mutual Exclusion Algorithms

Chapter 9: Distributed Mutual Exclusion Algorithms

9.1 Introduction

Mutual exclusion is a fundamental problem in distributed computing systems. Mutual exclusion ensures that concurrent access by processes to a shared resource or data is serialized, that is, executed in a mutually exclusive manner. Mutual exclusion in a distributed system requires that only one process be allowed to execute the critical section (CS) at any given time.

In a distributed system, shared variables (semaphores) or a local kernel cannot be used to implement mutual exclusion. Message passing is the sole means for implementing distributed mutual exclusion. The decision as to which process is allowed to access the CS next is arrived at by message passing, in which each process learns about the state of all other processes in some consistent way. The design of distributed mutual exclusion algorithms is complex because these algorithms have to deal with unpredictable message delays and incomplete knowledge of the system state. There are three basic approaches for implementing distributed mutual exclusion:

1. Token-based approach
2. Non-token-based approach
3. Quorum-based approach

In the token-based approach, a unique token (also known as the PRIVILEGE message) is shared among the sites. A site is allowed to enter its CS if it possesses the token, and it continues to hold the token until its execution of the CS is over. Mutual exclusion is ensured because the token is unique. The algorithms based on this approach essentially differ in the way a site carries out the search for the token.

In the non-token-based approach, two or more successive rounds of messages are exchanged among the sites to determine which site will enter the CS next. A site enters the CS when an assertion, defined on its local variables, becomes true. Mutual exclusion is enforced because the assertion becomes true at only one site at any given time.

In the quorum-based approach, each site requests permission to execute the CS from a subset of sites (called a quorum). The quorums are formed in such a way that when two sites concurrently

request access to the CS, one site receives both requests and is responsible for ensuring that only one request executes the CS at any time.

In this chapter, we describe several distributed mutual exclusion algorithms and compare their features and performance. We discuss the relationships among various mutual exclusion algorithms and examine the trade-offs among them.

9.2 Preliminaries

In this section, we describe the underlying system model, discuss the requirements that mutual exclusion algorithms should satisfy, and discuss the metrics we use to measure the performance of mutual exclusion algorithms.

System Model

The system consists of N sites, S 1, S 2, ..., S N. Without loss of generality, we assume that a single process is running on each site. The process at site S i is denoted by p i. All these processes communicate asynchronously over an underlying communication network. A process wishing to enter the CS requests all other processes, or a subset of them, by sending REQUEST messages, and waits for appropriate replies before entering the CS. While waiting, the process is not allowed to make further requests to enter the CS.

A site can be in one of the following three states: requesting the CS, executing the CS, or neither requesting nor executing the CS (i.e., idle). In the requesting-the-CS state, the site is blocked and cannot make further requests for the CS. In the idle state, the site is executing outside the CS. In token-based algorithms, a site can also be in a state where a site holding the token is executing outside the CS. Such a state is referred to as the idle token state. At any instant, a site may have several pending requests for the CS. A site queues up these requests and serves them one at a time.

We do not make any assumption about whether communication channels are FIFO; this is algorithm specific. We assume that channels reliably deliver all messages, sites do not crash, and the network does not get partitioned. Some mutual exclusion algorithms are designed to handle such situations. Many algorithms use Lamport-style logical clocks to assign a timestamp to critical section requests. Timestamps are used to decide the priority of requests in case of a conflict. A general rule is that the smaller the timestamp of a request, the higher its priority to execute the CS.

We use the following notation: N denotes the number of processes or sites involved in invoking the critical section, T denotes the average message delay, and E denotes the average critical section execution time.

Requirements of Mutual Exclusion Algorithms

A mutual exclusion algorithm should satisfy the following properties:

[Figure 9.1: Synchronization delay, the interval between the instant the last site exits the CS and the instant the next site enters the CS.]

1. Safety property: The safety property states that at any instant, only one process can execute the critical section. This is an essential property of a mutual exclusion algorithm.

2. Liveness property: This property states the absence of deadlock and starvation. Two or more sites should not endlessly wait for messages that will never arrive. In addition, a site must not wait indefinitely to execute the CS while other sites are repeatedly executing the CS. That is, every requesting site should get an opportunity to execute the CS in finite time.

3. Fairness: Fairness in the context of mutual exclusion means that each process gets a fair chance to execute the CS. In mutual exclusion algorithms, the fairness property generally means that CS execution requests are executed in the order of their arrival in the system (where time is determined by a logical clock).

The first property is absolutely necessary, and the other two are considered important in mutual exclusion algorithms.

Performance Metrics

The performance of mutual exclusion algorithms is generally measured by the following four metrics:

Message complexity: The number of messages required per CS execution by a site.

Synchronization delay: The time required after a site leaves the CS and before the next site enters the CS (see Figure 9.1). Note that normally one or more sequential message exchanges may be required after a site exits the CS and before the next site can enter the CS.

Response time: The time interval a request waits for its CS execution to be over after its request messages have been sent out (see Figure 9.2). Thus, response time does not include the time a request waits at a site before its request messages have been sent out.

[Figure 9.2: Response time, measured from the instant a site sends out its request messages to the instant it exits the CS.]

System throughput: The rate at which the system executes requests for the CS. If SD is the synchronization delay and E is the average critical section execution time, then the throughput is given by the following equation:

    system throughput = 1/(SD + E)

Generally, the value of a performance metric fluctuates statistically from request to request, and we generally consider the average value of such a metric.

Low- and high-load performance: The load is determined by the arrival rate of CS execution requests. The performance of a mutual exclusion algorithm depends upon the load, and we often study the performance of mutual exclusion algorithms under two special loading conditions, viz., "low load" and "high load". Under low-load conditions, there is seldom more than one request for the critical section present in the system simultaneously. Under heavy-load conditions, there is always a pending request for the critical section at a site. Thus, in heavy-load conditions, after having executed a request, a site immediately initiates activities to execute its next CS request. A site is seldom in the idle state in heavy-load conditions. For many mutual exclusion algorithms, the performance metrics can be computed easily under low and heavy loads through simple mathematical reasoning.

Best- and worst-case performance: Generally, mutual exclusion algorithms have best and worst cases for the performance metrics. In the best case, prevailing conditions are such that a performance metric attains the best possible value. For example, in most mutual exclusion algorithms the best value of the response time is a round-trip message delay plus the CS execution time, 2T + E. Often for mutual exclusion algorithms, the best and worst cases coincide with low and high loads, respectively.
For example, the best and worst values of the response time are achieved when the load is, respectively, low and high; in some mutual exclusion algorithms, the best and worst message traffic is generated at low- and heavy-load conditions, respectively.
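To make the formulas above concrete, the following minimal sketch evaluates the throughput and best-case response-time expressions; the numeric values of T, E, and SD are made-up illustrations, not values taken from the text:

```python
# Illustrative values only (assumptions, not from the chapter):
T = 0.01    # average message delay, in seconds
E = 0.05    # average CS execution time, in seconds
SD = T      # e.g., an algorithm whose synchronization delay is one message hop

throughput = 1 / (SD + E)          # CS executions per second
best_response_time = 2 * T + E     # round-trip delay plus CS execution time

print(f"throughput = {throughput:.2f} CS executions/s")
print(f"best-case response time = {best_response_time:.2f} s")
```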

9.3 Lamport's Algorithm

Lamport developed a distributed mutual exclusion algorithm as an illustration of his clock synchronization scheme [12]. The algorithm is fair in the sense that requests for the CS are executed in the order of their timestamps, where time is determined by logical clocks. When a site processes a request for the CS, it updates its local clock and assigns the request a timestamp. The algorithm executes CS requests in increasing order of timestamps. Every site S i keeps a queue, request_queue i, which contains mutual exclusion requests ordered by their timestamps. (Note that this queue is different from the queue that contains local requests for CS execution awaiting their turn.) This algorithm requires communication channels to deliver messages in FIFO order.

The Algorithm

Requesting the critical section: When a site S i wants to enter the CS, it broadcasts a REQUEST(ts i, i) message to all other sites and places the request on request_queue i. ((ts i, i) denotes the timestamp of the request.) When a site S j receives the REQUEST(ts i, i) message from site S i, it places site S i's request on request_queue j and returns a timestamped REPLY message to S i.

Executing the critical section: Site S i enters the CS when the following two conditions hold:

L1: S i has received a message with timestamp larger than (ts i, i) from all other sites.

L2: S i's request is at the top of request_queue i.

Releasing the critical section: Site S i, upon exiting the CS, removes its request from the top of its request queue and broadcasts a timestamped RELEASE message to all other sites. When a site S j receives a RELEASE message from site S i, it removes S i's request from its request queue. When a site removes a request from its request queue, its own request may come to the top of the queue, enabling it to enter the CS.

Clearly, when a site receives a REQUEST, REPLY, or RELEASE message, it updates its clock using the timestamp in the message.
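The steps above can be sketched in code. The following is a hypothetical single-process simulation in which "sites" exchange messages by direct method calls, standing in for instantaneous FIFO channels; the class and method names are ours, not from the text:

```python
# Sketch of Lamport's mutual exclusion algorithm. Assumption: synchronous,
# in-order message delivery is modeled by direct method calls.
import heapq

class Site:
    def __init__(self, sid, all_sites):
        self.sid = sid
        self.all = all_sites      # shared dict: sid -> Site
        self.clock = 0            # Lamport logical clock
        self.queue = []           # request_queue: heap of (ts, sid)
        self.last_ts = {}         # sid -> timestamp of last message from sid

    def _tick(self, ts=0):
        self.clock = max(self.clock, ts) + 1

    def request_cs(self):
        self._tick()
        self.my_req = (self.clock, self.sid)
        heapq.heappush(self.queue, self.my_req)
        for s in self.all.values():          # broadcast REQUEST
            if s is not self:
                s.on_request(self.my_req)

    def on_request(self, req):
        ts, j = req
        self._tick(ts)
        heapq.heappush(self.queue, req)
        self.last_ts[j] = ts
        self.all[j].on_reply(self.clock, self.sid)   # timestamped REPLY

    def on_reply(self, ts, j):
        self._tick(ts)
        self.last_ts[j] = ts

    def can_enter(self):
        # L1: a message with larger timestamp received from every other site
        l1 = all(self.last_ts.get(j, -1) > self.my_req[0]
                 for j in self.all if j != self.sid)
        # L2: own request at the top of the local request queue
        l2 = self.queue and self.queue[0] == self.my_req
        return l1 and bool(l2)

    def release_cs(self):
        heapq.heappop(self.queue)            # own request is at the top
        self._tick()
        for s in self.all.values():          # broadcast RELEASE
            if s is not self:
                s.on_release(self.my_req, self.clock)

    def on_release(self, req, ts):
        self._tick(ts)
        self.queue.remove(req)
        heapq.heapify(self.queue)
```

In a usage run, a site that requests first collects REPLYs with larger timestamps from everyone and, with its request at the head of the queue, may enter; a later requester becomes eligible only after the RELEASE removes the earlier request.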
Correctness

Theorem 1: Lamport's algorithm achieves mutual exclusion.

Proof: The proof is by contradiction. Suppose two sites S i and S j are executing the CS concurrently. For this to happen, conditions L1 and L2 must hold at both sites concurrently. This

implies that at some instant in time, say t, both S i and S j have their own requests at the top of their request_queues and condition L1 holds at them. Without loss of generality, assume that S i's request has a smaller timestamp than the request of S j. From condition L1 and the FIFO property of the communication channels, it is clear that at instant t the request of S i must be present in request_queue j when S j was executing its CS. This implies that S j's own request is at the top of its own request_queue while a smaller timestamp request, S i's request, is present in request_queue j, a contradiction! Hence, Lamport's algorithm achieves mutual exclusion.

Theorem 2: Lamport's algorithm is fair.

Proof: A distributed mutual exclusion algorithm is fair if the requests for the CS are executed in the order of their timestamps. The proof is by contradiction. Suppose site S i's request has a smaller timestamp than the request of another site S j, and yet S j is able to execute the CS before S i. For S j to execute the CS, it has to satisfy conditions L1 and L2. This implies that at some instant in time, say t, S j has its own request at the top of its queue and it has also received a message with timestamp larger than the timestamp of its request from all other sites. But the request_queue at a site is ordered by timestamp, and according to our assumption S i has the lower timestamp. So S i's request must be placed ahead of S j's request in request_queue j. This is a contradiction. Hence, Lamport's algorithm is a fair mutual exclusion algorithm.

An Example

In Figures 9.3 to 9.6, we illustrate the operation of Lamport's algorithm. In Figure 9.3, sites S 1 and S 2 are making requests for the CS and send out REQUEST messages to the other sites. The timestamps of the requests are (1, 1) and (1, 2), respectively.

[Figure 9.3: Sites S 1 and S 2 are making requests for the CS.]
In Figure 9.4, both sites S 1 and S 2 have received REPLY messages from all other sites. S 1 has its request at the top of its request_queue, but site S 2 does not have its request at the top of its request_queue. Consequently, site S 1 enters the CS. In Figure 9.5, S 1 exits and sends RELEASE messages to all other sites. In Figure 9.6, site S 2 has received a REPLY from all other sites and has also received a RELEASE message from site S 1. Site S 2 updates its request_queue and its request is now at the top of its request_queue. Consequently, it enters the CS next.

[Figure 9.4: Site S 1 enters the CS.]

[Figure 9.5: Site S 1 exits the CS and sends RELEASE messages.]

[Figure 9.6: Site S 2 enters the CS.]

Performance

For each CS execution, Lamport's algorithm requires (N-1) REQUEST messages, (N-1) REPLY messages, and (N-1) RELEASE messages. Thus, Lamport's algorithm requires 3(N-1) messages per CS invocation. The synchronization delay in the algorithm is T.

An Optimization

In Lamport's algorithm, REPLY messages can be omitted in certain situations. For example, if site S j receives a REQUEST message from site S i after it has sent its own REQUEST message with a timestamp higher than the timestamp of site S i's request, then site S j need not send a REPLY message to site S i. This is because when site S i receives site S j's request with a timestamp higher than its own, it can conclude that site S j does not have any smaller timestamp request still pending (because communication channels preserve FIFO ordering). With this optimization, Lamport's algorithm requires between 2(N-1) and 3(N-1) messages per CS execution.

9.4 Ricart-Agrawala Algorithm

The Ricart-Agrawala [21] algorithm assumes the communication channels are FIFO. The algorithm uses two types of messages: REQUEST and REPLY. A process sends a REQUEST message to all other processes to request their permission to enter the critical section. A process sends a REPLY message to a process to give its permission to that process. Processes use Lamport-style logical clocks to assign a timestamp to critical section requests. Timestamps are used to decide the priority of requests in case of a conflict: if a process p i that is waiting to execute the critical section receives a REQUEST message from process p j, then if the priority of p j's request is lower, p i defers the REPLY to p j and sends a REPLY message to p j only after executing the CS for its pending request. Otherwise, p i sends a REPLY message to p j immediately, provided it is not currently executing the CS.
Thus, if several processes are requesting execution of the CS, the highest priority request succeeds in collecting all the needed REPLY messages and gets to execute the CS. Each process p i maintains the Request-Deferred array, RD i, the size of which is the same as the number of processes in the system. Initially, for all i and j: RD i [j] = 0. Whenever p i defers the request sent by p j, it sets RD i [j] = 1; after it has sent a REPLY message to p j, it sets RD i [j] = 0.

Description of the Algorithm

Requesting the critical section:

(a) When a site S i wants to enter the CS, it broadcasts a timestamped REQUEST message to all other sites.

(b) When site S j receives a REQUEST message from site S i, it sends a REPLY message to site S i if site S j is neither requesting nor executing the CS, or if site S j is requesting and S i's request's timestamp is smaller than site S j's own request's timestamp. Otherwise, the reply is deferred and S j sets RD j [i] = 1.

Executing the critical section:

(c) Site S i enters the CS after it has received a REPLY message from every site it sent a REQUEST message to.

Releasing the critical section:

(d) When site S i exits the CS, it sends all the deferred REPLY messages: for all j, if RD i [j] = 1, then it sends a REPLY message to S j and sets RD i [j] = 0.

When a site receives a message, it updates its clock using the timestamp in the message. Also, when a site takes up a request for the CS for processing, it updates its local clock and assigns a timestamp to the request. In this algorithm, a site's REPLY messages are blocked only by sites requesting the CS with higher priority (i.e., a smaller timestamp). Thus, when a site sends out deferred REPLY messages, the site with the next highest priority request receives the last needed REPLY message and enters the CS. Execution of the CS requests in this algorithm is always in the order of their timestamps.

Correctness

Theorem 3: The Ricart-Agrawala algorithm achieves mutual exclusion.

Proof: The proof is by contradiction. Suppose two sites S i and S j are executing the CS concurrently and S i's request has higher priority (i.e., a smaller timestamp) than the request of S j. Clearly, S i received S j's request after it had made its own request. (Otherwise, S i's request would have lower priority.) Thus, S j can concurrently execute the CS with S i only if S i returns a REPLY to S j (in response to S j's request) before S i exits the CS. However, this is impossible because S j's request has lower priority. Therefore, the Ricart-Agrawala algorithm achieves mutual exclusion.
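Steps (a) through (d) can be sketched as follows. This is a hypothetical single-process simulation, with the same direct-call stand-in for FIFO channels as before; for brevity, the Request-Deferred array RD i is modeled as a set of deferred site ids rather than a 0/1 array:

```python
# Sketch of the Ricart-Agrawala algorithm. Assumption: instantaneous,
# in-order delivery via direct method calls; RD is a set, not an array.

class Site:
    def __init__(self, sid, all_sites):
        self.sid = sid
        self.all = all_sites            # shared dict: sid -> Site
        self.clock = 0
        self.requesting = self.executing = False
        self.my_ts = None               # (timestamp, sid) of pending request
        self.replies = set()            # sites that have granted permission
        self.RD = set()                 # deferred requesters (RD[j] = 1)

    def request_cs(self):               # step (a)
        self.clock += 1
        self.my_ts = (self.clock, self.sid)
        self.requesting = True
        self.replies.clear()
        for s in self.all.values():
            if s is not self:
                s.on_request(self.my_ts)

    def on_request(self, req):          # step (b)
        ts, j = req
        self.clock = max(self.clock, ts) + 1
        # defer iff executing, or requesting with a higher-priority request
        if self.executing or (self.requesting and self.my_ts < req):
            self.RD.add(j)
        else:
            self.all[j].on_reply(self.sid)

    def on_reply(self, j):
        self.replies.add(j)

    def can_enter(self):                # step (c)
        return self.requesting and len(self.replies) == len(self.all) - 1

    def enter_cs(self):
        assert self.can_enter()
        self.requesting, self.executing = False, True

    def release_cs(self):               # step (d): send deferred REPLYs
        self.executing = False
        deferred, self.RD = self.RD, set()
        for j in deferred:
            self.all[j].on_reply(self.sid)
```

Note the two-message pattern: unlike the Lamport sketch, there is no RELEASE broadcast; the deferred REPLY doubles as the release notification.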
In the Ricart-Agrawala algorithm, for every requesting pair of sites, the site with the higher priority request will always defer the request of the lower priority site. At any time, only the highest priority request succeeds in getting all the needed REPLY messages.

An Example

Figures 9.7 to 9.10 illustrate the operation of the Ricart-Agrawala algorithm. In Figure 9.7, sites S 1 and S 2 are making requests for the CS and send out REQUEST messages to the other sites. The timestamps of the requests are (2, 1) and (1, 2), respectively. In Figure 9.8, S 2 has received REPLY messages from all other sites and, consequently, it enters the CS. In Figure 9.9, S 2 exits the CS and sends a REPLY message to site S 1. In Figure 9.10, site S 1 has received a REPLY from all other sites and enters the CS next.

[Figure 9.7: Sites S 1 and S 2 are making requests for the CS.]

[Figure 9.8: Site S 2 enters the CS.]

[Figure 9.9: Site S 2 exits the CS and sends a REPLY message for S 1's deferred request.]

[Figure 9.10: Site S 1 enters the CS.]

Performance

For each CS execution, the Ricart-Agrawala algorithm requires (N-1) REQUEST messages and (N-1) REPLY messages. Thus, it requires 2(N-1) messages per CS execution. The synchronization delay in the algorithm is T.

9.5 Singhal's Dynamic Information-Structure Algorithm

Most mutual exclusion algorithms use a static approach to invoke mutual exclusion; i.e., they always take the same course of action to invoke mutual exclusion, no matter what the state of the system is. A problem with these algorithms is the lack of efficiency, because they fail to exploit changing conditions in the system. Note that an algorithm can exploit the dynamic conditions of the system to optimize performance. For example, if a few sites are invoking mutual exclusion very frequently and other sites invoke mutual exclusion much less frequently, then a frequently invoking site need not ask for the permission of a less frequently invoking site every time it requests access to the CS. It only needs to take permission from all other frequently invoking sites. Singhal [29] developed an adaptive mutual exclusion algorithm based on this observation. The information-structure of the algorithm evolves with time as sites learn about the state of the system through messages. Dynamic information-structure mutual exclusion algorithms are attractive because they can adapt to fluctuating system conditions to optimize performance. The design of such adaptive mutual exclusion algorithms is challenging, and we list some of the design challenges next:

How does a site efficiently know which sites are currently actively invoking mutual exclusion?

When a less frequently invoking site needs to invoke mutual exclusion, how does it do it?

How does a less frequently invoking site make a transition to a more frequently invoking site, and vice versa?

How do we ensure that mutual exclusion is guaranteed when a site does not take the permission of every other site?

How do we ensure that a dynamic mutual exclusion algorithm does not waste resources and time in collecting system state, offsetting any gain?

System Model

We consider a distributed system consisting of n autonomous sites, say S 1, S 2, ..., S n, which are connected by a communication network. We assume that the sites communicate completely by message passing. Message propagation delay is finite but unpredictable, and between any pair of sites, messages are delivered in the order they are sent. For ease of presentation, we assume that the underlying communication network is reliable and sites do not crash. However, methods have been proposed for recovery from message losses and site failures.

Data Structures

The information-structure at a site S i consists of two sets. The first set, R i, called the request set, contains the sites from which S i must acquire permission before executing the CS. The second set, I i, called the inform set, contains the sites to which S i must send its permission to execute the CS after executing its own CS. Every site S i maintains a logical clock C i, which is updated according to Lamport's rules. Every request for CS execution is assigned a timestamp which is used to determine its priority. The smaller the timestamp of a request, the higher its priority. Every site maintains three boolean variables to denote the state of the site: Requesting, Executing, and My_priority. Requesting and Executing are true if and only if the site is requesting or executing the CS, respectively. My_priority is true if the pending request of S i has priority over the current incoming request.
Initialization

The system starts in the following initial state. For a site S i (i = 1 to n):

    R i := {S 1, S 2, ..., S i-1, S i }
    I i := {S i }
    C i := 0
    Requesting := Executing := false

Thus, initially site S i, 1 <= i <= n, sends request messages only to sites S i-1, ..., S 1. If we stagger sites S n to S 1 from left to right, then the initial system state has the following two properties:

1. Each site requests permission from all the sites to its right and from no site to its left. Conversely, for a site, all the sites to its left ask for its permission and no site to its right asks for its permission. Putting it together: for a site, only the sites to its left will ask for its permission, and it will ask for the permission of only the sites to its right. Therefore, every site S i divides all the sites into two disjoint groups: all the sites in the first group request permission from S i, and S i requests permission from all the sites in the second group. This property is important for enforcing mutual exclusion.

2. The cardinality of R i decreases in a stepwise manner from left to right. For this reason, this configuration has been called the "staircase pattern" in a topological sense [27].

The Algorithm

Site S i executes the following three steps to invoke mutual exclusion:

Step 1: (Request critical section)
    Requesting := true;
    C i := C i + 1;
    Send REQUEST(C i, i) message to all sites in R i;
    Wait until R i = {}; /* wait until all sites in R i have sent a reply to S i */
    Requesting := false;

Step 2: (Execute critical section)
    Executing := true;
    Execute CS;
    Executing := false;

Step 3: (Release critical section)
    For every site S k in I i (except S i) do
    Begin
        I i := I i - {S k };
        Send REPLY(C i, i) message to S k;
        R i := R i + {S k }
    End

REQUEST message handler

The REQUEST message handler at a site processes incoming REQUEST messages. It takes actions such as updating the information-structure and sending REQUEST/REPLY messages to other sites. The REQUEST message handler at site S i is given below:

/* Site S i is handling message REQUEST(c, j) */
    C i := max{C i, c};
    Case
    Requesting = true:
        Begin
            If My_priority then I i := I i + {S j }
                /* My_priority is true if the pending request of S i has priority over the incoming request */
            Else
                Begin
                    Send REPLY(C i, i) message to S j;
                    If not (S j in R i) then
                        Begin
                            R i := R i + {S j };
                            Send REQUEST(C i, i) message to site S j;
                        End;
                End;
        End;
    Executing = true:
        I i := I i + {S j };
    Executing = false and Requesting = false:
        Begin
            R i := R i + {S j };
            Send REPLY(C i, i) message to S j;
        End;

REPLY message handler

The REPLY message handler at a site processes incoming REPLY messages. It updates the information-structure. The REPLY message handler at site S i is given below:

/* Site S i is handling a message REPLY(c, j) */
    Begin
        C i := max{C i, c};
        R i := R i - {S j };
    End;

Note that the REQUEST and REPLY message handlers and the steps of the algorithm access shared data structures, viz., C i, R i, and I i. To guarantee correctness, it is important that the executions of the REQUEST and REPLY message handlers and of all three steps of the algorithm (except the "wait for R i

= {} to hold" in Step 1) mutually exclude each other.

An Explanation of the Algorithm

At a high level, S i acquires permission to execute the CS from all sites in its request set R i, and it releases the CS by sending a REPLY message to all sites in its inform set I i. If site S i, which is itself requesting the CS, receives a higher priority REQUEST message from a site S j, then S i takes the following actions: (i) S i immediately sends a REPLY message to S j, (ii) if S j is not in R i, then S i also sends a REQUEST message to S j (see footnote 1), and (iii) S i places an entry for S j in R i. Otherwise (i.e., if the request of S i has priority over the request of S j), S i places an entry for S j in I i so that S j can be sent a REPLY message when S i finishes executing the CS. If S i receives a REQUEST message from S j when it is executing the CS, then it simply puts S j in I i so that S j can be sent a REPLY message when S i finishes executing the CS. If S i receives a REQUEST message from S j when it is neither requesting nor executing the CS, then it places an entry for S j in R i and sends S j a REPLY message.

The rules for information exchange and for updating the request and inform sets are such that the staircase pattern is preserved in the system even after the sites have executed the CS any number of times. However, the positions of sites in the staircase pattern change as the system evolves. (For a proof of this, see [28].) The site to execute the CS last positions itself at the right end of the staircase pattern.

Correctness

We informally discuss why the algorithm achieves mutual exclusion and why it is free from deadlocks. For a formal proof, the reader is referred to [28].

Achieving mutual exclusion: Note that the initial state of the information-structure satisfies the following condition: for every S i and S j, either S j is in R i or S i is in R j. Therefore, if two sites request the CS, one of them will always ask for the permission of the other.
However, this alone is not sufficient for mutual exclusion [18]. Whenever there is a conflict between two sites (i.e., they concurrently invoke mutual exclusion), the sites dynamically adjust their request sets such that both request permission of each other, satisfying the condition for mutual exclusion. This is a nice feature of the algorithm: if the information-structures of the sites had to satisfy the condition for mutual exclusion at all times, the sites would exchange more messages. Instead, it is more desirable to dynamically adjust the request sets of the sites as and when needed to ensure mutual exclusion, because doing so optimizes the number of messages exchanged.

Footnote 1: The absence of S j from R i implies that S i has not previously sent a REQUEST message to S j. This is the reason why S i also sends a REQUEST message to S j when it receives a REQUEST message from S j. This step is also required to preserve the staircase pattern of the information-structure of the system.
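The handlers and steps described above can be sketched in code. This is a hypothetical single-process simulation (direct method calls stand in for reliable FIFO channels; names are ours); as a simplification, R i is modeled as excluding S i itself, so the staircase initialization for site i is {S 1, ..., S i-1}:

```python
# Sketch of Singhal's dynamic information-structure algorithm.
# Assumptions: synchronous direct-call delivery; R excludes the site itself.

class Site:
    def __init__(self, sid, all_sites):
        self.sid = sid
        self.all = all_sites                   # shared dict: sid -> Site
        self.R = set(range(1, sid))            # request set (staircase init)
        self.I = set()                         # inform set (self is implicit)
        self.C = 0                             # logical clock
        self.requesting = self.executing = False
        self.my_ts = None

    def request_cs(self):                      # Step 1
        self.requesting = True
        self.C += 1
        self.my_ts = (self.C, self.sid)
        for j in list(self.R):
            self.all[j].on_request(self.C, self.sid)

    def can_enter(self):                       # R empty => all replies in
        return self.requesting and not self.R

    def enter_cs(self):                        # Step 2
        assert self.can_enter()
        self.requesting, self.executing = False, True

    def release_cs(self):                      # Step 3
        self.executing = False
        for k in list(self.I):
            self.I.discard(k)
            self.R.add(k)                      # k may enter the CS before us
            self.all[k].on_reply(self.C, self.sid)

    def on_request(self, c, j):                # REQUEST message handler
        self.C = max(self.C, c)
        if self.requesting:
            if self.my_ts < (c, j):            # our pending request wins
                self.I.add(j)
            else:                              # incoming request wins
                self.all[j].on_reply(self.C, self.sid)
                if j not in self.R:
                    self.R.add(j)
                    self.all[j].on_request(self.C, self.sid)
        elif self.executing:
            self.I.add(j)
        else:                                  # idle: grant and remember j
            self.R.add(j)
            self.all[j].on_reply(self.C, self.sid)

    def on_reply(self, c, j):                  # REPLY message handler
        self.C = max(self.C, c)
        self.R.discard(j)
```

After a few requests, the sets evolve exactly as the staircase discussion predicts: the most recent CS executor ends up with an empty (rightmost) request set, while idle sites accumulate entries for it.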

Freedom from deadlocks: In the algorithm, each request is assigned a globally unique timestamp which determines its priority. The algorithm is free from deadlocks because sites use timestamp ordering (which is unique system-wide) to decide request priority, and a request is blocked only by higher priority requests.

An Example

Consider a system with five sites S 1, ..., S 5. Suppose S 2 and S 3 want to enter the CS concurrently, and they both send appropriate request messages. S 3 sends a request message to the sites in its request set, {S 1, S 2 }, and S 2 sends a request message to the only site in its request set, {S 1 }. There are three possible scenarios:

1. If the timestamp of S 3's request is smaller, then on receiving S 3's request, S 2 sends a REPLY message to S 3. S 2 also adds S 3 to its request set and sends S 3 a REQUEST message. On receiving a REPLY message from S 2, S 3 removes S 2 from its request set. S 1 sends a REPLY to both S 2 and S 3 because it is neither requesting to enter the CS nor executing the CS. S 1 adds S 2 and S 3 to its request set because either of these sites could possibly be in the CS when S 1 requests entry into the CS in the future. On receiving S 1's REPLY message, S 3 removes S 1 from its request set and, since it has received REPLY messages from all sites in its (initial) request set, it enters the CS.

2. If the timestamp of S 3's request is larger, then on receiving S 3's request, S 2 adds S 3 to its inform set. When S 2 gets a REPLY from S 1, it enters the CS. When S 2 relinquishes the CS, it informs S 3 (the id of S 3 is present in S 2's inform set) of its consent to enter the CS. Then, S 2 removes S 3 from its inform set and adds S 3 to its request set. This is logical because S 3 could be executing in the CS when S 2 requests permission to enter the CS in the future.

3.
If S 2 receives a REPLY from S 1 and starts executing the CS before S 3's REQUEST reaches S 2, S 2 simply adds S 3 to its inform set and sends S 3 a REPLY after exiting the CS.

Performance Analysis

The synchronization delay in the algorithm is T. Below, we compute the message complexity under low and heavy loads.

Low load condition: In the case of low traffic of CS requests, most of the time only one request (or none) for the CS will be present in the system. Consequently, the staircase pattern will re-establish itself between two successive requests for the CS, and there will seldom be interference among the CS requests from different sites. In the staircase configuration, the cardinalities of the request sets of the sites are 1, 2, ..., (n-1), n, respectively, from right to left. Therefore, when the traffic of requests for the CS is low, sites will send 0, 1, 2, ..., (n-1) REQUEST messages with equal likelihood (assuming uniform traffic of CS requests at sites). Therefore, the mean number of REQUEST messages sent

per CS execution in this case is (0 + 1 + 2 + ... + (n-1))/n = (n-1)/2. Since a REPLY message is returned for every REQUEST message, the average number of messages exchanged per CS execution is 2*(n-1)/2 = (n-1).

Heavy load condition: When the rate of CS requests is high, all the sites always have a pending request for CS execution. In this case, a site on average receives (n-1)/2 REQUEST messages from other sites while waiting for its REPLY messages. Since a site sends REQUEST messages only in response to REQUEST messages of higher priority, it will on average send (n-1)/4 REQUEST messages while waiting for REPLY messages. Therefore, the average number of messages exchanged per CS execution under high demand is 2*[(n-1)/2 + (n-1)/4] = 3*(n-1)/2.

Adaptivity in Heterogeneous Traffic Patterns

An interesting feature of the algorithm is that its information structure adapts itself to environments of heterogeneous traffic of CS requests and to statistical fluctuations in the traffic of CS requests to optimize the performance (the number of messages exchanged per CS execution). In non-uniform traffic environments, sites with higher traffic of CS requests position themselves towards the right end of the staircase pattern. That is, sites with higher traffic of CS requests tend to have lower cardinality of their request sets. Also, at a high traffic site Si, if Sj ∈ Ri, then Sj is also a high traffic site (this follows intuitively because all high traffic sites cluster towards the right end of the staircase). Consequently, high traffic sites mostly send REQUEST messages only to other high traffic sites and seldom send REQUEST messages to sites with low traffic. This adaptivity results in a reduction in the number of messages as well as in the delay in granting the CS in environments of heterogeneous traffic.
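The low-load and heavy-load averages derived above can be checked numerically. The sketch below simply evaluates the two counting arguments (n is the number of sites):

```python
def avg_messages_low_load(n):
    # Staircase: sites send 0, 1, ..., n-1 REQUESTs with equal likelihood,
    # and every REQUEST is matched by exactly one REPLY.
    avg_requests = sum(range(n)) / n          # = (n-1)/2
    return 2 * avg_requests                   # = (n-1)

def avg_messages_heavy_load(n):
    # A waiting site receives (n-1)/2 REQUESTs and sends (n-1)/4 REQUESTs
    # on average; again each message is paired with a REPLY.
    return 2 * ((n - 1) / 2 + (n - 1) / 4)    # = 3*(n-1)/2

avg_messages_low_load(11)    # -> 10.0
avg_messages_heavy_load(11)  # -> 15.0
```

For n = 11 this gives 10 messages per CS execution at low load versus 15 at heavy load, matching the closed forms (n-1) and 3(n-1)/2.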
9.6 Lodha and Kshemkalyani's Fair Mutual Exclusion Algorithm

Lodha and Kshemkalyani's algorithm [13] decreases the message complexity of the Ricart-Agrawala algorithm by using the following interesting observation: when a site is waiting to execute the CS, it need not receive REPLY messages from every other site. To enter the CS, a site only needs to receive a REPLY message from the site whose request just precedes its own request in priority. For example, if sites Si1, Si2, ..., Sin have pending requests for the CS, the request of Si1 has the highest priority, that of Sin has the lowest priority, and the priority of requests decreases from Si1 to Sin, then a site Sik only needs a REPLY message from site Si(k-1), 1 < k ≤ n, to enter the CS.

System Model

Each request is assigned a priority ReqID, and requests for CS access are granted in order of decreasing priority. We defer the details of what the ReqID is composed of to later sections.

The underlying communication network is assumed to be error-free.

Definition 1: Ri and Rj are concurrent iff Pi's REQUEST message is received by Pj after Pj has made its request and Pj's REQUEST message is received by Pi after Pi has made its request.

Definition 2: Given Ri, we define the concurrency set of Ri as follows: CSet_i = { Rj | Ri is concurrent with Rj } ∪ { Ri }.

The Algorithm

The algorithm uses three types of messages, REQUEST, REPLY and FLUSH, and obtains savings in the number of messages exchanged per CS access by assigning multiple purposes to each. For the purpose of blocking a mutual exclusion request, every site Si has a data structure called local_request_queue (denoted LRQ_i), which contains all requests made concurrently with respect to Si's request, ordered by priority. All requests are totally ordered by their priorities, and the priority is determined by the timestamp of the request. Hence, when a process receives a REQUEST message from some other process, it can immediately determine whether it is allowed CS access before or after the requesting process. In this algorithm, messages play multiple roles, as discussed next.

Multiple uses of a REPLY message

1. A REPLY message acts as a reply from a process that is not requesting.

2. A REPLY message acts as a collective reply from processes that have higher priority requests. A REPLY(Rj) from a process Pj indicates that Rj is the request made by Pj for which it has executed the CS. It also indicates that all the requests with priority greater than or equal to the priority of Rj have finished executing the CS and are no longer in contention. Thus, in such situations, a REPLY message is a logical reply and denotes a collective reply from all processes that had made higher priority requests.

Uses of a FLUSH message

Similar to a REPLY message, a FLUSH message is a logical reply and denotes a collective reply from all processes that had made higher priority requests.
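Both REPLY(Rj) and FLUSH(Rj) therefore let the receiver purge from its local request queue every request with priority greater than or equal to that of Rj. A minimal sketch, assuming ReqIDs are (timestamp, process_id) tuples where a smaller tuple means higher priority:

```python
# A ReqID is a (timestamp, process_id) tuple; a smaller tuple means a
# higher priority, so "priority >= that of rj" translates to "tuple <= rj".
def purge(lrq, rj):
    # Keep only the requests with strictly lower priority than rj.
    return [r for r in lrq if r > rj]

lrq = [(1, 3), (2, 1), (2, 4), (5, 2)]   # sorted in decreasing priority
purge(lrq, (2, 1))                        # -> [(2, 4), (5, 2)]
```

Here a single REPLY(2, 1) collectively acknowledges both (1, 3) and (2, 1), leaving only the lower-priority requests in contention.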
After a process has exited the CS, it sends a FLUSH message to the process requesting with the next highest priority, which is determined by looking up the process's local request queue. When a process Pi finishes executing the CS, it may find a process Pj in one of the following states:

1. Rj is in the local queue of Pi, located in some position after Ri, which implies that Rj is concurrent with Ri.

2. Pj had replied to Ri and Pj is now requesting with a lower priority. (Note that in this case Ri and Rj are not concurrent.)

3. Pj's request had higher priority than Pi's (implying that Pj has finished executing the CS) and Pj is now requesting with a lower priority. (Note that in this case Ri and Rj are not concurrent.)

After executing the CS, a process Pi sends a FLUSH message to the process identified in Case 1 above that has the next highest priority, whereas it sends REPLY messages to the processes identified in Cases 2 and 3, as their requests are not concurrent with Ri (the requests of the processes in Cases 2 and 3 were deferred by Pi until it exited the CS). Now it is up to the process receiving the FLUSH message and the processes receiving REPLY messages in Cases 2 and 3 to determine who is allowed to enter the CS next. Consider a scenario with a set of requests R3, R0, R2, R4, R1 ordered in decreasing priority, where R0, R2, R4 are concurrent with one another. Then P0 maintains a local queue of [R0, R2, R4] and, when it exits the CS, it sends a FLUSH (only) to P2.

Multiple uses of a REQUEST message

Considering two processes Pi and Pj, there can be two cases:

Case 1: Pi and Pj are not concurrently requesting. In this case, the process which requests first will get a REPLY message from the other process.

Case 2: Pi and Pj are concurrently requesting. In this case, there are two subcases:

1. Pi is requesting with a higher priority than Pj. In this case, Pj's REQUEST message serves as an implicit REPLY message to Pi's request. Also, Pj should wait for a REPLY/FLUSH message from some process to enter the CS.

2. Pi is requesting with a lower priority than Pj. In this case, Pi's REQUEST message serves as an implicit REPLY message to Pj's request.
Also, Pi should wait for a REPLY/FLUSH message from some process to enter the CS.

Initial local state for process Pi:

- int My_Sequence_Number_i = 0
- array of boolean RV_i[j] = 0, ∀j ∈ {1...N}
- queue of ReqID LRQ_i is NULL
- int Highest_Sequence_Number_Seen_i = 0

InvMutEx: Process Pi executes the following to invoke mutual exclusion:

1. My_Sequence_Number_i = Highest_Sequence_Number_Seen_i + 1.
2. LRQ_i = NULL.
3. Make a REQUEST(Ri) message, where Ri = (My_Sequence_Number_i, i).
4. Insert this REQUEST in LRQ_i in sorted order.
5. Send this REQUEST message to all other processes.
6. RV_i[k] = 0, ∀k ∈ {1...N} - {i}. RV_i[i] = 1.

RcvReq: Process Pi receives REQUEST(Rj), where Rj = (SN, j), from process Pj:

1. Highest_Sequence_Number_Seen_i = max(Highest_Sequence_Number_Seen_i, SN).
2. If Pi is requesting:
(a) If RV_i[j] = 0, then insert this request in LRQ_i (in sorted order) and mark RV_i[j] = 1. If (CheckExecuteCS), then execute the CS.
(b) If RV_i[j] = 1, then defer the processing of this request; it will be processed after Pi executes the CS.
3. If Pi is not requesting, then send a REPLY(Ri) message to Pj. Here Ri denotes the ReqID of the last request made by Pi that was satisfied.

RcvReply: Process Pi receives a REPLY(Rj) message from process Pj. Rj denotes the ReqID of the last request made by Pj that was satisfied.

1. RV_i[j] = 1.
2. Remove all requests from LRQ_i that have a priority ≥ the priority of Rj.
3. If (CheckExecuteCS), then execute the CS.

FinCS: Process Pi finishes executing the CS.

1. Send a FLUSH(Ri) message to the next candidate in LRQ_i. Ri denotes the ReqID that was just satisfied.
2. Send REPLY(Ri) to the deferred requests. Ri is the ReqID corresponding to which Pi just executed the CS.

RcvFlush: Process Pi receives a FLUSH(Rj) message from a process Pj:

1. RV_i[j] = 1.
2. Remove all requests from LRQ_i that have a priority ≥ the priority of Rj.
3. If (CheckExecuteCS), then execute the CS.

CheckExecuteCS: If (RV_i[k] = 1, ∀k ∈ {1...N}) and Pi's request is at the head of LRQ_i, then return true, else return false.
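The handlers above can be sketched in executable form. This is a sketch under stated assumptions, not the authors' implementation: message transport is abstracted behind a caller-supplied send(dest, msg) stub, ReqIDs are (sequence_number, process_id) tuples (smaller tuple = higher priority), and a sentinel ReqID (0, pid) stands in for "no request satisfied yet".

```python
import bisect

class Process:
    """One process running the Lodha-Kshemkalyani handlers (sketch)."""

    def __init__(self, pid, n, send):
        self.pid, self.n, self.send = pid, n, send
        self.highest_seen = 0           # Highest_Sequence_Number_Seen
        self.lrq = []                   # LRQ, kept sorted by priority
        self.rv = [0] * (n + 1)         # RV, 1-indexed by process id
        self.my_req = None              # pending ReqID, if requesting
        self.last_satisfied = (0, pid)  # sentinel: nothing satisfied yet
        self.deferred = []              # ids of deferred requesters
        self.in_cs = False

    def invoke(self):                                  # InvMutEx
        self.my_req = (self.highest_seen + 1, self.pid)
        self.lrq = [self.my_req]
        for j in range(1, self.n + 1):
            if j != self.pid:
                self.send(j, ('REQUEST', self.my_req))
        self.rv = [0] * (self.n + 1)
        self.rv[self.pid] = 1

    def recv_request(self, rj):                        # RcvReq
        sn, j = rj
        self.highest_seen = max(self.highest_seen, sn)
        if self.my_req is not None:
            if self.rv[j] == 0:
                bisect.insort(self.lrq, rj)
                self.rv[j] = 1
                self._check_execute_cs()
            else:
                self.deferred.append(j)    # handled after CS exit
        else:
            self.send(j, ('REPLY', self.last_satisfied))

    def recv_reply(self, rj):                          # RcvReply / RcvFlush
        self.rv[rj[1]] = 1
        # Purge requests with priority >= that of rj (i.e. tuple <= rj).
        self.lrq = [r for r in self.lrq if r > rj]
        self._check_execute_cs()

    recv_flush = recv_reply

    def finish_cs(self):                               # FinCS
        self.in_cs = False
        self.last_satisfied, self.my_req = self.my_req, None
        self.lrq = self.lrq[1:]                        # drop own request
        if self.lrq:                                   # next candidate
            self.send(self.lrq[0][1], ('FLUSH', self.last_satisfied))
        for j in self.deferred:
            self.send(j, ('REPLY', self.last_satisfied))
        self.deferred = []

    def _check_execute_cs(self):                       # CheckExecuteCS
        if all(self.rv[1:]) and self.lrq and self.lrq[0] == self.my_req:
            self.in_cs = True
```

For instance, with three processes and only P1 requesting: P1 broadcasts REQUEST((1,1)); P2 and P3, being non-requesting, each answer with a REPLY carrying their sentinel ReqID; once both replies arrive, P1's CheckExecuteCS succeeds and it enters the CS.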

Figure 9.11: Processes P1 and P2 send out REQUESTs.

Safety, Fairness and Liveness

Proofs of safety, fairness and liveness are quite involved, and interested readers are referred to the original paper for detailed proofs.

An Example

Figure 9.11: Processes P1 and P2 are concurrent and they send out REQUESTs to all other processes. The REQUEST sent by P1 to P3 is delayed and hence is not shown until Figure 9.13.

Figure 9.12: When P3 receives the REQUEST from P2, it sends a REPLY to P2.

Figure 9.13: The delayed REQUEST of P1 arrives at P3 and, at the same time, P3 sends out its own REQUEST for the CS, which makes it concurrent with the request of P1.

Figure 9.14: P1 exits the CS and sends out a FLUSH message to P2.

Figure 9.15: Since the requests of P2 and P3 are not concurrent, P2 sends a FLUSH message to P3. P3 removes (1,1) from its local queue and enters the CS.

The data structures LRQ and RV are updated in each step as discussed previously.

Message Complexity

To execute the CS, a process Pi sends N-1 REQUEST messages. It receives (N - |CSet_i|) REPLY messages. There are two cases to consider:

1. |CSet_i| ≥ 2. There are two subcases here.

a. There is at least one request in CSet_i whose priority is smaller than that of Ri. So Pi will send one FLUSH message. In this case the total number of messages for CS access is 2N - |CSet_i|. When all the requests are concurrent, this reduces to N messages.

Figure 9.12: P3 sends out a REPLY to just P2.

Figure 9.13: P3 sends out its REQUEST; P1 enters the CS.

Figure 9.14: P1 exits the CS and sends out a FLUSH message to P2.

Figure 9.15: P3 enters the CS. (The figure's legend distinguishes the REQUESTs from P1, P2, and P3, the REPLY from P3, and the FLUSH messages P1 sends to P2 and P2 sends to P3.)

b. There is no request in CSet_i whose priority is less than the priority of Ri. Pi will not send a FLUSH message. In this case, the total number of messages for CS access is 2N - 1 - |CSet_i|. When all the requests are concurrent, this reduces to N-1 messages.

2. |CSet_i| = 1. This is the worst case, implying that all requests are satisfied serially. Pi will not send a FLUSH message. In this case, the total number of messages for CS access is 2(N-1).

9.7 Quorum-Based Mutual Exclusion Algorithms

Quorum-based mutual exclusion algorithms represented a departure from the earlier approaches in the following two ways:

1. A site does not request permission from all other sites, but only from a subset of the sites. This is a radically different approach compared to the Lamport and Ricart-Agrawala algorithms, where all sites participate in the conflict resolution of all other sites. In quorum-based mutual exclusion algorithms, the request sets of the sites are chosen such that ∀i ∀j : 1 ≤ i, j ≤ N :: Ri ∩ Rj ≠ ∅. Consequently, every pair of sites has a site which mediates conflicts between that pair.

2. In quorum-based mutual exclusion algorithms, a site can send out only one REPLY message at any time. A site can send a REPLY message only after it has received a RELEASE message for the previous REPLY message. Therefore, a site Si locks all the sites in Ri in exclusive mode before executing its CS.

Quorum-based mutual exclusion algorithms significantly reduce the message complexity of invoking mutual exclusion by having sites ask permission from only a subset of sites. Since these algorithms are based on the notions of coteries and quorums, we first describe these ideas. A coterie C is defined as a set of sets, where each set g ∈ C is called a quorum. The following properties hold for quorums in a coterie:

Intersection property: For every pair of quorums g, h ∈ C, g ∩ h ≠ ∅.
For example, the sets {1,2,3}, {2,5,7} and {5,7,9} cannot be quorums in a coterie because the first and third sets do not have a common element.

Minimality property: There should be no quorums g, h in coterie C such that g ⊇ h. For example, the sets {1,2,3} and {1,3} cannot both be quorums in a coterie because the first set is a superset of the second.

Coteries and quorums can be used to develop algorithms that ensure mutual exclusion in a distributed environment. A simple protocol works as follows: let a be a site in quorum A. If a wants to invoke mutual exclusion, it requests permission from all sites in its quorum A. Every

site does the same to invoke mutual exclusion. Due to the Intersection property, quorum A contains at least one site that is common to the quorum of every other site. These common sites send permission to only one site at any time. Thus, mutual exclusion is guaranteed. Note that the Minimality property ensures efficiency rather than correctness. In the simplest form, quorums are formed as sets that contain a majority of the sites. There exists a variety of quorums and a variety of ways to construct them. For example, Maekawa used the theory of projective planes to develop quorums of size √N.

9.8 Maekawa's Algorithm

Maekawa's algorithm [14] was the first quorum-based mutual exclusion algorithm. The request sets for sites (i.e., quorums) in Maekawa's algorithm are constructed to satisfy the following conditions:

M1: (∀i ∀j : i ≠ j, 1 ≤ i, j ≤ N :: Ri ∩ Rj ≠ ∅)

M2: (∀i : 1 ≤ i ≤ N :: Si ∈ Ri)

M3: (∀i : 1 ≤ i ≤ N :: |Ri| = K)

M4: Any site Sj is contained in K of the Ri's, 1 ≤ i, j ≤ N.

Maekawa used the theory of projective planes and showed that N = K(K-1) + 1. This relation gives |Ri| = √N. Since there is at least one common site between the request sets of any two sites (condition M1), every pair of sites has a common site which mediates conflicts between the pair. A site can have only one outstanding REPLY message at any time; that is, it grants permission to an incoming request only if it has not granted permission to some other site. Therefore, mutual exclusion is guaranteed. The algorithm requires the delivery of messages to be in the order they are sent between every pair of sites. Conditions M1 and M2 are necessary for correctness, whereas conditions M3 and M4 provide other desirable features to the algorithm.
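Conditions M1-M4 can be checked mechanically for any proposed family of request sets. The sketch below is such a checker over a hand-built example; R is a hypothetical dict mapping each site id to its request set, and the example is the trivial K = 2 case of the N = K(K-1) + 1 relation, not Maekawa's general projective-plane construction:

```python
from itertools import combinations

def check_maekawa_conditions(R):
    """Return True iff the request sets R (dict: site -> set of sites)
    satisfy Maekawa's conditions M1-M4."""
    sites = set(R)
    K = len(next(iter(R.values())))
    m1 = all(R[i] & R[j] for i, j in combinations(sites, 2))     # pairwise overlap
    m2 = all(i in R[i] for i in sites)                           # S_i in R_i
    m3 = all(len(R[i]) == K for i in sites)                      # equal size K
    m4 = all(sum(j in R[i] for i in sites) == K for j in sites)  # K memberships
    return m1 and m2 and m3 and m4

# With K = 2, the relation N = K(K-1) + 1 gives N = 3 sites:
R = {1: {1, 2}, 2: {2, 3}, 3: {1, 3}}
check_maekawa_conditions(R)   # -> True
```

Dropping any element from one of the sets breaks M1 or M3, which the checker reports immediately.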
Condition M3 states that the request sets of all sites must be of equal size, implying that all sites have to do an equal amount of work to invoke mutual exclusion. Condition M4 enforces that exactly the same number of sites should request permission from any site, implying that all sites have equal responsibility in granting permission to other sites.

The Algorithm

In Maekawa's algorithm, a site Si executes the following steps to execute the CS.

Requesting the critical section

(a) A site Si requests access to the CS by sending REQUEST(i) messages to all sites in its request set Ri.

(b) When a site Sj receives the REQUEST(i) message, it sends a REPLY(j) message to Si provided it hasn't sent a REPLY message to any site since its receipt of the last RELEASE message. Otherwise, it queues up the REQUEST(i) for later consideration.

Executing the critical section

(c) Site Si executes the CS only after it has received a REPLY message from every site in Ri.

Releasing the critical section

(d) After the execution of the CS is over, site Si sends a RELEASE(i) message to every site in Ri.

(e) When a site Sj receives a RELEASE(i) message from site Si, it sends a REPLY message to the next site waiting in the queue and deletes that entry from the queue. If the queue is empty, then the site updates its state to reflect that it has not sent out any REPLY message since the receipt of the last RELEASE message.

Correctness

Theorem 3: Maekawa's algorithm achieves mutual exclusion.

Proof: The proof is by contradiction. Suppose two sites Si and Sj are concurrently executing the CS. This means site Si received a REPLY message from all sites in Ri and, concurrently, site Sj was able to receive a REPLY message from all sites in Rj. If Ri ∩ Rj = {Sk}, then site Sk must have sent REPLY messages to both Si and Sj concurrently, which is a contradiction.

Performance

Note that the size of a request set is √N. Therefore, an execution of the CS requires √N REQUEST, √N REPLY, and √N RELEASE messages, resulting in 3√N messages per CS execution. The synchronization delay in this algorithm is 2T. This is because after a site Si exits the CS, it first releases all the sites in Ri, and then one of those sites sends a REPLY message to the next site that executes the CS. Thus, two sequential message transfers are required between two successive CS executions. As discussed next, Maekawa's algorithm is deadlock-prone.
Measures to handle deadlocks require additional messages.

Problem of Deadlocks

Maekawa's algorithm can deadlock because a site is exclusively locked by other sites and requests are not prioritized by their timestamps [14, 22]. Thus, a site may send a REPLY message to one site and later force a higher priority request from another site to wait.
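Steps (a)-(e) of the basic protocol can be sketched as follows. This is a sketch, not Maekawa's full algorithm: it omits the additional deadlock-handling messages just mentioned, and send(dest, msg) is a hypothetical transport stub supplied by the caller.

```python
from collections import deque

class MaekawaSite:
    """One site's behaviour in the basic (deadlock-prone) Maekawa protocol."""

    def __init__(self, sid, request_set, send):
        self.sid, self.request_set, self.send = sid, request_set, send
        self.voted = False      # True while a REPLY is outstanding
        self.queue = deque()    # deferred REQUESTs, in arrival order
        self.grants = set()     # sites that replied to our request

    def request_cs(self):                        # step (a)
        self.grants.clear()
        for j in self.request_set:
            self.send(j, ('REQUEST', self.sid))

    def on_request(self, i):                     # step (b)
        if not self.voted:
            self.voted = True
            self.send(i, ('REPLY', self.sid))
        else:
            self.queue.append(i)                 # defer for later

    def on_reply(self, j):                       # step (c)
        self.grants.add(j)
        return self.grants == self.request_set   # True -> may enter CS

    def release_cs(self):                        # step (d)
        for j in self.request_set:
            self.send(j, ('RELEASE', self.sid))

    def on_release(self, i):                     # step (e)
        if self.queue:
            nxt = self.queue.popleft()
            self.send(nxt, ('REPLY', self.sid))  # vote passes to the next waiter
        else:
            self.voted = False                   # no REPLY outstanding
```

Because of condition M2 a site belongs to its own request set, so in this sketch it also sends REQUEST and RELEASE messages to itself; a real implementation may short-circuit those self-messages.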


More information

Mutual Exclusion in DS

Mutual Exclusion in DS Mutual Exclusion in DS Event Ordering Mutual Exclusion Election Algorithms Reaching Agreement Event Ordering Happened-before relation (denoted by ). If A and B are events in the same process, and A was

More information

Distributed Systems. Lec 9: Distributed File Systems NFS, AFS. Slide acks: Dave Andersen

Distributed Systems. Lec 9: Distributed File Systems NFS, AFS. Slide acks: Dave Andersen Distributed Systems Lec 9: Distributed File Systems NFS, AFS Slide acks: Dave Andersen (http://www.cs.cmu.edu/~dga/15-440/f10/lectures/08-distfs1.pdf) 1 VFS and FUSE Primer Some have asked for some background

More information

Synchronization Part II. CS403/534 Distributed Systems Erkay Savas Sabanci University

Synchronization Part II. CS403/534 Distributed Systems Erkay Savas Sabanci University Synchronization Part II CS403/534 Distributed Systems Erkay Savas Sabanci University 1 Election Algorithms Issue: Many distributed algorithms require that one process act as a coordinator (initiator, etc).

More information

Verteilte Systeme/Distributed Systems Ch. 5: Various distributed algorithms

Verteilte Systeme/Distributed Systems Ch. 5: Various distributed algorithms Verteilte Systeme/Distributed Systems Ch. 5: Various distributed algorithms Holger Karl Computer Networks Group Universität Paderborn Goal of this chapter Apart from issues in distributed time and resulting

More information

Algorithms and protocols for distributed systems

Algorithms and protocols for distributed systems Algorithms and protocols for distributed systems We have defined process groups as having peer or hierarchical structure and have seen that a coordinator may be needed to run a protocol such as 2PC. With

More information

Mutual Exclusion. 1 Formal problem definitions. Time notion CSE /17/2015. Outline of this lecture:

Mutual Exclusion. 1 Formal problem definitions. Time notion CSE /17/2015. Outline of this lecture: CSE 539 03/17/2015 Mutual Exclusion Lecture 15 Scribe: Son Dinh Outline of this lecture: 1. Formal problem definitions 2. Solution for 2 threads 3. Solution for n threads 4. Inherent costs of mutual exclusion

More information

CSE 5306 Distributed Systems

CSE 5306 Distributed Systems CSE 5306 Distributed Systems Synchronization Jia Rao http://ranger.uta.edu/~jrao/ 1 Synchronization An important issue in distributed system is how process cooperate and synchronize with one another Cooperation

More information

Last Class: Clock Synchronization. Today: More Canonical Problems

Last Class: Clock Synchronization. Today: More Canonical Problems Last Class: Clock Synchronization Logical clocks Vector clocks Global state Lecture 12, page 1 Today: More Canonical Problems Distributed snapshot and termination detection Election algorithms Bully algorithm

More information

Local Coteries and a Distributed Resource Allocation Algorithm

Local Coteries and a Distributed Resource Allocation Algorithm Vol. 37 No. 8 Transactions of Information Processing Society of Japan Aug. 1996 Regular Paper Local Coteries and a Distributed Resource Allocation Algorithm Hirotsugu Kakugawa and Masafumi Yamashita In

More information

Concurrency: Mutual Exclusion and Synchronization

Concurrency: Mutual Exclusion and Synchronization Concurrency: Mutual Exclusion and Synchronization 1 Needs of Processes Allocation of processor time Allocation and sharing resources Communication among processes Synchronization of multiple processes

More information

CS370 Operating Systems

CS370 Operating Systems CS370 Operating Systems Colorado State University Yashwant K Malaiya Fall 2017 Lecture 11 Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 FAQ Multilevel Feedback Queue: Q0, Q1,

More information

EXTENDING THE PRIORITY CEILING PROTOCOL USING READ/WRITE AFFECTED SETS MICHAEL A. SQUADRITO A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE

EXTENDING THE PRIORITY CEILING PROTOCOL USING READ/WRITE AFFECTED SETS MICHAEL A. SQUADRITO A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE EXTENDING THE PRIORITY CEILING PROTOCOL USING READ/WRITE AFFECTED SETS BY MICHAEL A. SQUADRITO A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE IN COMPUTER

More information

Concurrency, Mutual Exclusion and Synchronization C H A P T E R 5

Concurrency, Mutual Exclusion and Synchronization C H A P T E R 5 Concurrency, Mutual Exclusion and Synchronization C H A P T E R 5 Multiple Processes OS design is concerned with the management of processes and threads: Multiprogramming Multiprocessing Distributed processing

More information

An algorithm for Performance Analysis of Single-Source Acyclic graphs

An algorithm for Performance Analysis of Single-Source Acyclic graphs An algorithm for Performance Analysis of Single-Source Acyclic graphs Gabriele Mencagli September 26, 2011 In this document we face with the problem of exploiting the performance analysis of acyclic graphs

More information

Synchronization. Chapter 5

Synchronization. Chapter 5 Synchronization Chapter 5 Clock Synchronization In a centralized system time is unambiguous. (each computer has its own clock) In a distributed system achieving agreement on time is not trivial. (it is

More information

Distributed Deadlock

Distributed Deadlock Distributed Deadlock 9.55 DS Deadlock Topics Prevention Too expensive in time and network traffic in a distributed system Avoidance Determining safe and unsafe states would require a huge number of messages

More information

Advanced Topics in Distributed Systems. Dr. Ayman A. Abdel-Hamid. Computer Science Department Virginia Tech

Advanced Topics in Distributed Systems. Dr. Ayman A. Abdel-Hamid. Computer Science Department Virginia Tech Advanced Topics in Distributed Systems Dr. Ayman A. Abdel-Hamid Computer Science Department Virginia Tech Synchronization (Based on Ch6 in Distributed Systems: Principles and Paradigms, 2/E) Synchronization

More information

Distributed Synchronization. EECS 591 Farnam Jahanian University of Michigan

Distributed Synchronization. EECS 591 Farnam Jahanian University of Michigan Distributed Synchronization EECS 591 Farnam Jahanian University of Michigan Reading List Tanenbaum Chapter 5.1, 5.4 and 5.5 Clock Synchronization Distributed Election Mutual Exclusion Clock Synchronization

More information

Module 11. Directed Graphs. Contents

Module 11. Directed Graphs. Contents Module 11 Directed Graphs Contents 11.1 Basic concepts......................... 256 Underlying graph of a digraph................ 257 Out-degrees and in-degrees.................. 258 Isomorphism..........................

More information

The Information Structure of Distributed Mutual Exclusion Algorithms

The Information Structure of Distributed Mutual Exclusion Algorithms The Information Structure of Distributed Mutual Exclusion Algorithms BEVERLY A. SANDERS University of Maryland The concept of an information structure is introduced as a unifying principle behind several

More information

Concurrency: Mutual Exclusion and

Concurrency: Mutual Exclusion and Concurrency: Mutual Exclusion and Synchronization 1 Needs of Processes Allocation of processor time Allocation and sharing resources Communication among processes Synchronization of multiple processes

More information

A Two-Layer Hybrid Algorithm for Achieving Mutual Exclusion in Distributed Systems

A Two-Layer Hybrid Algorithm for Achieving Mutual Exclusion in Distributed Systems A Two-Layer Hybrid Algorithm for Achieving Mutual Exclusion in Distributed Systems QUAZI EHSANUL KABIR MAMUN *, MORTUZA ALI *, SALAHUDDIN MOHAMMAD MASUM, MOHAMMAD ABDUR RAHIM MUSTAFA * Dept. of CSE, Bangladesh

More information

Interprocess Communication By: Kaushik Vaghani

Interprocess Communication By: Kaushik Vaghani Interprocess Communication By: Kaushik Vaghani Background Race Condition: A situation where several processes access and manipulate the same data concurrently and the outcome of execution depends on the

More information

A Theory of Redo Recovery

A Theory of Redo Recovery A Theory of Redo Recovery David Lomet Microsoft Research Redmond, WA 98052 lomet@microsoft.com Mark Tuttle HP Labs Cambridge Cambridge, MA 02142 mark.tuttle@hp.com ABSTRACT Our goal is to understand redo

More information

A can be implemented as a separate process to which transactions send lock and unlock requests The lock manager replies to a lock request by sending a lock grant messages (or a message asking the transaction

More information

General Objectives: To understand the process management in operating system. Specific Objectives: At the end of the unit you should be able to:

General Objectives: To understand the process management in operating system. Specific Objectives: At the end of the unit you should be able to: F2007/Unit5/1 UNIT 5 OBJECTIVES General Objectives: To understand the process management in operating system Specific Objectives: At the end of the unit you should be able to: define program, process and

More information

Part II Process Management Chapter 6: Process Synchronization

Part II Process Management Chapter 6: Process Synchronization Part II Process Management Chapter 6: Process Synchronization 1 Process Synchronization Why is synchronization needed? Race Conditions Critical Sections Pure Software Solutions Hardware Support Semaphores

More information

CS 347 Parallel and Distributed Data Processing

CS 347 Parallel and Distributed Data Processing CS 347 Parallel and Distributed Data Processing Spring 2016 Notes 5: Concurrency Control Topics Data Database design Queries Decomposition Localization Optimization Transactions Concurrency control Reliability

More information

Time and Coordination in Distributed Systems. Operating Systems

Time and Coordination in Distributed Systems. Operating Systems Time and Coordination in Distributed Systems Oerating Systems Clock Synchronization Physical clocks drift, therefore need for clock synchronization algorithms Many algorithms deend uon clock synchronization

More information

arxiv: v8 [cs.dc] 29 Jul 2011

arxiv: v8 [cs.dc] 29 Jul 2011 On the Cost of Concurrency in Transactional Memory Petr Kuznetsov TU Berlin/Deutsche Telekom Laboratories Srivatsan Ravi TU Berlin/Deutsche Telekom Laboratories arxiv:1103.1302v8 [cs.dc] 29 Jul 2011 Abstract

More information

Models of concurrency & synchronization algorithms

Models of concurrency & synchronization algorithms Models of concurrency & synchronization algorithms Lecture 3 of TDA383/DIT390 (Concurrent Programming) Carlo A. Furia Chalmers University of Technology University of Gothenburg SP3 2016/2017 Today s menu

More information

Chapter 3. Set Theory. 3.1 What is a Set?

Chapter 3. Set Theory. 3.1 What is a Set? Chapter 3 Set Theory 3.1 What is a Set? A set is a well-defined collection of objects called elements or members of the set. Here, well-defined means accurately and unambiguously stated or described. Any

More information

Synchronization Part 2. REK s adaptation of Claypool s adaptation oftanenbaum s Distributed Systems Chapter 5 and Silberschatz Chapter 17

Synchronization Part 2. REK s adaptation of Claypool s adaptation oftanenbaum s Distributed Systems Chapter 5 and Silberschatz Chapter 17 Synchronization Part 2 REK s adaptation of Claypool s adaptation oftanenbaum s Distributed Systems Chapter 5 and Silberschatz Chapter 17 1 Outline Part 2! Clock Synchronization! Clock Synchronization Algorithms!

More information

PROCESS SYNCHRONIZATION

PROCESS SYNCHRONIZATION PROCESS SYNCHRONIZATION Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Semaphores Classic Problems of Synchronization Monitors Synchronization

More information

CS 361 Concurrent programming Drexel University Fall 2004 Lecture 8. Proof by contradiction. Proof of correctness. Proof of mutual exclusion property

CS 361 Concurrent programming Drexel University Fall 2004 Lecture 8. Proof by contradiction. Proof of correctness. Proof of mutual exclusion property CS 361 Concurrent programming Drexel University Fall 2004 Lecture 8 Bruce Char and Vera Zaychik. All rights reserved by the author. Permission is given to students enrolled in CS361 Fall 2004 to reproduce

More information

Distributed Systems 8L for Part IB

Distributed Systems 8L for Part IB Distributed Systems 8L for Part IB Handout 3 Dr. Steven Hand 1 Distributed Mutual Exclusion In first part of course, saw need to coordinate concurrent processes / threads In particular considered how to

More information

Self Stabilization. CS553 Distributed Algorithms Prof. Ajay Kshemkalyani. by Islam Ismailov & Mohamed M. Ali

Self Stabilization. CS553 Distributed Algorithms Prof. Ajay Kshemkalyani. by Islam Ismailov & Mohamed M. Ali Self Stabilization CS553 Distributed Algorithms Prof. Ajay Kshemkalyani by Islam Ismailov & Mohamed M. Ali Introduction There is a possibility for a distributed system to go into an illegitimate state,

More information

Response Time Analysis of Asynchronous Real-Time Systems

Response Time Analysis of Asynchronous Real-Time Systems Response Time Analysis of Asynchronous Real-Time Systems Guillem Bernat Real-Time Systems Research Group Department of Computer Science University of York York, YO10 5DD, UK Technical Report: YCS-2002-340

More information

CS 571 Operating Systems. Midterm Review. Angelos Stavrou, George Mason University

CS 571 Operating Systems. Midterm Review. Angelos Stavrou, George Mason University CS 571 Operating Systems Midterm Review Angelos Stavrou, George Mason University Class Midterm: Grading 2 Grading Midterm: 25% Theory Part 60% (1h 30m) Programming Part 40% (1h) Theory Part (Closed Books):

More information

Implementing Isolation

Implementing Isolation CMPUT 391 Database Management Systems Implementing Isolation Textbook: 20 & 21.1 (first edition: 23 & 24.1) University of Alberta 1 Isolation Serial execution: Since each transaction is consistent and

More information

1. Consider the following page reference string: 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6.

1. Consider the following page reference string: 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6. 1. Consider the following page reference string: 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6. What will be the ratio of page faults for the following replacement algorithms - FIFO replacement

More information

CS 347 Parallel and Distributed Data Processing

CS 347 Parallel and Distributed Data Processing CS 347 Parallel and Distributed Data Processing Spring 2016 Notes 5: Concurrency Control Topics Data Database design Queries Decomposition Localization Optimization Transactions Concurrency control Reliability

More information

Distributed minimum spanning tree problem

Distributed minimum spanning tree problem Distributed minimum spanning tree problem Juho-Kustaa Kangas 24th November 2012 Abstract Given a connected weighted undirected graph, the minimum spanning tree problem asks for a spanning subtree with

More information

Mutual Exclusion: Classical Algorithms for Locks

Mutual Exclusion: Classical Algorithms for Locks Mutual Exclusion: Classical Algorithms for Locks John Mellor-Crummey Department of Computer Science Rice University johnmc@cs.rice.edu COMP 422 Lecture 18 21 March 2006 Motivation Ensure that a block of

More information

Concurrent Objects and Linearizability

Concurrent Objects and Linearizability Chapter 3 Concurrent Objects and Linearizability 3.1 Specifying Objects An object in languages such as Java and C++ is a container for data. Each object provides a set of methods that are the only way

More information

Concurrency Control Algorithms

Concurrency Control Algorithms Concurrency Control Algorithms Given a number of conflicting transactions, the serializability theory provides criteria to study the correctness of a possible schedule of execution it does not provide

More information

Operating Systems. Mutual Exclusion in Distributed Systems. Lecturer: William Fornaciari

Operating Systems. Mutual Exclusion in Distributed Systems. Lecturer: William Fornaciari Politecnico di Milano Operating Systems Mutual Exclusion in Distributed Systems Lecturer: William Fornaciari Politecnico di Milano fornacia@elet.polimi.it www.elet.polimi.it/~fornacia William Fornaciari

More information

How Bitcoin achieves Decentralization. How Bitcoin achieves Decentralization

How Bitcoin achieves Decentralization. How Bitcoin achieves Decentralization Centralization vs. Decentralization Distributed Consensus Consensus without Identity, using a Block Chain Incentives and Proof of Work Putting it all together Centralization vs. Decentralization Distributed

More information

CSC Operating Systems Spring Lecture - XII Midterm Review. Tevfik Ko!ar. Louisiana State University. March 4 th, 2008.

CSC Operating Systems Spring Lecture - XII Midterm Review. Tevfik Ko!ar. Louisiana State University. March 4 th, 2008. CSC 4103 - Operating Systems Spring 2008 Lecture - XII Midterm Review Tevfik Ko!ar Louisiana State University March 4 th, 2008 1 I/O Structure After I/O starts, control returns to user program only upon

More information

Lecture 6: Logical Time

Lecture 6: Logical Time Lecture 6: Logical Time 1. Question from reviews a. 2. Key problem: how do you keep track of the order of events. a. Examples: did a file get deleted before or after I ran that program? b. Did this computers

More information

Subject Name: OPERATING SYSTEMS. Subject Code: 10EC65. Prepared By: Kala H S and Remya R. Department: ECE. Date:

Subject Name: OPERATING SYSTEMS. Subject Code: 10EC65. Prepared By: Kala H S and Remya R. Department: ECE. Date: Subject Name: OPERATING SYSTEMS Subject Code: 10EC65 Prepared By: Kala H S and Remya R Department: ECE Date: Unit 7 SCHEDULING TOPICS TO BE COVERED Preliminaries Non-preemptive scheduling policies Preemptive

More information

Dr. D. M. Akbar Hussain DE5 Department of Electronic Systems

Dr. D. M. Akbar Hussain DE5 Department of Electronic Systems Concurrency 1 Concurrency Execution of multiple processes. Multi-programming: Management of multiple processes within a uni- processor system, every system has this support, whether big, small or complex.

More information

Distributed Systems Principles and Paradigms

Distributed Systems Principles and Paradigms Distributed Systems Principles and Paradigms Chapter 05 (version 16th May 2006) Maarten van Steen Vrije Universiteit Amsterdam, Faculty of Science Dept. Mathematics and Computer Science Room R4.20. Tel:

More information

Chapter 6: Process Synchronization

Chapter 6: Process Synchronization Chapter 6: Process Synchronization Objectives Introduce Concept of Critical-Section Problem Hardware and Software Solutions of Critical-Section Problem Concept of Atomic Transaction Operating Systems CS

More information

Byzantine Consensus in Directed Graphs

Byzantine Consensus in Directed Graphs Byzantine Consensus in Directed Graphs Lewis Tseng 1,3, and Nitin Vaidya 2,3 1 Department of Computer Science, 2 Department of Electrical and Computer Engineering, and 3 Coordinated Science Laboratory

More information

(Pessimistic) Timestamp Ordering. Rules for read and write Operations. Read Operations and Timestamps. Write Operations and Timestamps

(Pessimistic) Timestamp Ordering. Rules for read and write Operations. Read Operations and Timestamps. Write Operations and Timestamps (Pessimistic) stamp Ordering Another approach to concurrency control: Assign a timestamp ts(t) to transaction T at the moment it starts Using Lamport's timestamps: total order is given. In distributed

More information