A Dag-Based Algorithm for Distributed Mutual Exclusion. Kansas State University, Manhattan, Kansas.


A Dag-Based Algorithm for Distributed Mutual Exclusion

Mitchell L. Neilsen, Masaaki Mizuno
Department of Computing and Information Sciences, Kansas State University, Manhattan, Kansas 66506

Abstract

The paper presents a token based distributed mutual exclusion algorithm. The algorithm assumes a fully connected, reliable physical network and a directed acyclic graph (dag) structured logical network. The number of messages required to provide mutual exclusion is dependent upon the logical topology imposed on the nodes. Using the best topology, the algorithm attains comparable performance to a centralized mutual exclusion algorithm; i.e., three messages per critical section entry. Also, the algorithm achieves minimal heavy-load synchronization delay and imposes very little storage overhead.

1 Introduction

Many distributed mutual exclusion algorithms have been proposed [1, 2, 3, 5, 6, 7, 8, 9, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23]. These algorithms can be classified into two groups [15, 20]. The algorithms in the first group are permission based [1, 3, 6, 7, 14, 16, 18, 19]. A node enters the critical section only after receiving permission from a set of nodes. The algorithms in the second group are token based [2, 5, 8, 9, 11, 12, 17, 20, 21, 22, 23]. The possession of a system-wide unique token gives a node the right to enter the critical section.

Lamport proposed one of the first distributed mutual exclusion algorithms [6]. The algorithm is permission based and requires 3(N − 1) messages to provide mutual exclusion. Another permission based algorithm, proposed by Ricart and Agrawala, reduced the number of required messages to 2(N − 1) messages per critical section entry [16]. By using a different definition of symmetry, Carvalho and Roucairol were able to reduce the number of messages to be between 0 and 2(N − 1) [3]. (This work was supported in part by the National Science Foundation under Grant CCR-8822378.)
Maekawa proposed a permission based algorithm in which the number of messages required is O(√N) [7]. Sanders generalized permission based algorithms by defining the information structure which each node maintains [18]. In response to Carvalho and Roucairol's algorithm, Ricart and Agrawala proposed a token based algorithm [17] which is essentially the same as Suzuki and Kasami's algorithm [21]. Based on Suzuki and Kasami's algorithm, Singhal proposed a heuristically-aided algorithm that uses state information to more accurately guess the location of the token [20]. The maximum number of messages required by these three algorithms is N. Recently, Nishio et al. proposed a failure recovery algorithm which can be used with these algorithms [11].

By imposing a tree-based logical structure on the nodes, another class of token based algorithms has been obtained. In this class of algorithms, all of the nodes, except for the root node, are on a path to the root node (a sink node) in the logical structure. The logical structure determines the path along which a request message travels. There are two different types of logical structures: dynamic and static. For node i, define NEIGHBOR_i to be the set of nodes j such that there is an edge from node i to node j or from node j to node i. In a dynamic logical structure, NEIGHBOR_i changes from time to time at each node i. On the other hand, in a static logical structure, NEIGHBOR_i is constant at every node i in the system.

An algorithm based on a dynamic logical structure was proposed by Trehel and Naimi [22]. A similar algorithm was proposed by Bernabeu-Auban and Ahamad [2]. The basic notion underlying these algorithms is path reversal. A path reversal at a node x is performed as the request by node x travels along the path from node x to the root node. As the request travels,

node x becomes the new parent of each node on the path, except for node x itself. Thus, node x becomes the new root node. A complete analysis of path reversal has been given by Ginat [4]. The average number of messages required per critical section is O(log(N)).

If a static logical structure is used, the basic notion underlying the algorithm is what we will call edge reversal. An edge reversal at a node x is performed as the request by node x travels along the path from node x to the root node. As the request travels, the direction of each edge on the path is changed to point towards node x. However, the shape of the logical structure never changes. Surprisingly, this small change results in algorithms which have a small fixed upper bound on the number of messages required per critical section entry, and the upper bound only depends on the logical structure. One such algorithm was proposed by Raymond [12]. The algorithm assumes that the static logical structure is an unrooted tree. If the radiating star topology is used, the average number of messages required is O(log N). However, this is not the optimal topology, as we shall show. A similar algorithm was proposed by van de Snepscheut [23].

This paper presents a token based algorithm which further improves Raymond's tree-based approach. The algorithm assumes a fully connected physical network and a static, directed acyclic graph (dag) structured logical network. It reduces the number of required messages to roughly half the number required in Raymond's algorithm. The best logical topology, in terms of the maximum number of messages required per critical section, is the star topology. Using the star topology, the maximum number of messages required is three, which is the same performance exhibited by centralized schemes.
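The edge-reversal notion can be sketched concretely. The following is a minimal illustration, not the paper's code (the node numbering, the `last` map, and the function name are ours); it models only the LAST pointers as a request travels to the sink:

```python
def edge_reversal(last, x):
    """Flip each edge on the path from requester x to the sink so it
    points back toward x's side; the shape of the logical structure
    never changes.  last[i] is node i's LAST pointer (0 = sink)."""
    node, prev = last[x], x
    last[x] = 0                # x becomes the new sink
    while node != 0:
        nxt = last[node]
        last[node] = prev      # point at the neighbor the request came from
        prev, node = node, nxt
    return last

# Straight line 1 -> 2 -> 3 -> 4 -> 5 with node 5 the sink; after
# node 3's request, node 3 is the sink and the last two edges flip.
print(edge_reversal({1: 2, 2: 3, 3: 4, 4: 5, 5: 0}, 3))
# {1: 2, 2: 3, 3: 0, 4: 3, 5: 4}
```

Note that each intermediate node ends up pointing at the neighbor the request arrived from, not at the requester itself; this is exactly the distinction from path reversal drawn above.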
Furthermore, the heavy-load synchronization delay (the maximum number of sequential messages required between one node leaving the critical section and another waiting node entering the critical section) is minimal, i.e., one message. The light-load synchronization delay (the maximum number of sequential messages required for a critical section entry assuming no pending requests exist when a request is issued) is the diameter of the logical topology (i.e., the length of the longest path in the topology) plus one.

A node or a token does not need to maintain a queue of outstanding requests for mutual exclusion. Instead, the queue is maintained implicitly in a distributed manner and may be deduced by observing the states of the nodes. The algorithm requires very simple data structures; each node maintains a few simple variables, and the token carries no data structure.

Section 2 informally describes the algorithm by using a simple example. The detailed algorithm is presented in Section 3. Section 4 presents proofs of correctness with respect to guaranteed mutual exclusion, deadlock freedom, and starvation freedom. Section 5 analyzes the performance of the algorithm.

2 Overview of the algorithm

We assume that the system consists of N nodes, which are uniquely numbered from 1 to N. At any given time, each node can have at most one outstanding request to enter the critical section. Physically, the nodes are fully connected by a reliable network. Logically, they are arranged in a dag structure with only one sink node. The out-degree of each node is at most one. We further impose that the structure of the graph is acyclic even without considering the directions of the edges.

Two types of messages, REQUEST and PRIVILEGE, are exchanged among nodes. When a node requests to enter the critical section, it initiates a REQUEST message. A PRIVILEGE message represents the token; when a node receives a PRIVILEGE message, it can enter the critical section.
Each node maintains three simple variables: integer variables LAST and NEXT, and a boolean variable HOLDING. The logical dag structure indicates the path along which a REQUEST message travels and is imposed by the LAST variables in the nodes. When a node initiates or receives a REQUEST message, the node forwards the request to the neighboring node pointed at by its LAST variable (unless the node is a sink, in which case its LAST variable is 0; this case will be explained below).

The NEXT variable indicates the node which will be granted mutual exclusion after this node. If the node is currently the last node to be granted mutual exclusion, its NEXT variable is 0. Thus, by following the NEXT variables from the token holder to the node whose NEXT variable is 0, the implicit waiting queue of the system can be deduced. When a node leaves the critical section, it forwards the token to the node at the top of the implicit waiting queue and performs a dequeue operation. That is, it sends a PRIVILEGE message to the node indicated by its NEXT variable (this is the node at the top of the queue) and clears the variable (this corresponds to the dequeue operation), unless NEXT is 0. If NEXT is 0, the node continues to hold the token. This case is explained below.
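The deduction of the implicit queue can be sketched in a few lines. This is our own illustration (the node numbers and the NEXT snapshot are hypothetical, not taken from the paper):

```python
def implicit_queue(next_of, token_holder):
    """Recover the implicit waiting queue by following NEXT pointers
    from the token holder until a 0 (end of queue) is reached."""
    queue, node = [], next_of[token_holder]
    while node != 0:
        queue.append(node)
        node = next_of[node]
    return queue

# Hypothetical snapshot: node 5 holds the token; node 3 waits, then node 1.
NEXT = {1: 0, 2: 0, 3: 1, 4: 0, 5: 3}
print(implicit_queue(NEXT, token_holder=5))   # [3, 1]
```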

Semantically, a sink node in the system is (1) the last node in the implicit waiting queue (i.e., its NEXT variable is 0), and (2) the last node in the path along which a request travels (i.e., its LAST variable is 0). When a sink node receives a REQUEST message, it enqueues the request into the implicit waiting queue and becomes a non-sink. The node initiating the request becomes the new sink, since it is now the last node in the queue. Each edge in the path must change direction to point in the direction of the new sink. This is done by the nodes along the path in a distributed manner as follows:

- When a node initiates a new REQUEST message, it forwards the message to the neighboring node indicated by its LAST variable and sets its LAST variable to 0 to become a new sink. It remains a sink until it receives a subsequent request.
- When an intermediate (non-sink) node receives a REQUEST message from a neighboring node X, it passes the message to the neighboring node indicated by its LAST variable. Then, the node sets its LAST variable to X. Thus, if it receives another request later, it forwards the request in the direction of the new sink. (In Trehel and Naimi's algorithm, by contrast, the LAST variable is set to the node that initiated the request, not to the neighboring node on the path to that node.)
- When a sink node receives a REQUEST message from a node X, it sets its NEXT variable to the identifier of the node initiating the request. This corresponds to an enqueue operation. The node also sets its LAST variable to X to enter the path in the direction of the new sink. Note that if a sink node holds the token but is not in the critical section (this state is indicated by a boolean variable HOLDING) when it receives a request, it immediately forwards the token to the node initiating the request.

Because of message delay, there may be several sink nodes in the system while requests are in transit.
Assume that two nodes, X and Y, initiate requests at about the same time. There may be at most three sink nodes while the requests are in transit: node X, node Y, and the current sink node. The current sink node becomes a non-sink when it receives one of the requests (assume it receives the request from node X). Node X becomes a non-sink when it receives the request from node Y. Eventually, node Y becomes the only sink node in the system.

The system is initialized so that one node possesses the token. This is the sink node, and its LAST variable is initialized to 0. In all other nodes, LAST is set to point to the neighbor which is on the path to the node holding the token.

Consider the example given in Figure 1. Node 5 holds the token initially. Let the directed edges indicate the direction in which the LAST variables are pointing. The initial configuration is shown in Figure 1a. Suppose node 5 wants to enter the critical section. Since node 5 holds the token, it can enter immediately. Now, suppose node 3 wants to enter the critical section. It sends a REQUEST message to node 4 and sets its LAST variable to 0 to become a new sink (refer to Figure 1b). Node 4 receives the request, sets its LAST variable to point to node 3, and forwards the REQUEST message to node 5 on behalf of node 3 (refer to Figure 1c). Node 5 receives the REQUEST message. Since node 5 is a sink node, it sets its NEXT variable to point to node 3 and sets its LAST variable to point to node 4 to become a non-sink. When node 5 leaves the critical section, it sends a PRIVILEGE message to the node indicated by its NEXT variable, i.e., node 3. Finally, node 3 receives the PRIVILEGE message and enters the critical section (refer to Figure 1d).

[Figure 1. Simple example: configurations 1a–1d of the five-node straight-line topology.]

3 The Algorithm

The complete algorithm is shown in Figure 2. There are two procedures at each node: P1 and P2. Procedure P1 is executed when node I requests entry into the critical section, and procedure P2 is executed when node I receives a request from some other node.

    const I = node identifier
    var HOLDING : boolean;
        LAST, NEXT : integer;

    procedure P1;  (* Enter critical section *)
      if (not HOLDING) then
        send REQUEST(I, I) to LAST;
        LAST := 0;
        wait until a PRIVILEGE message is received;
      HOLDING := false;
      (* critical section *)
      if (NEXT ≠ 0) then
        send PRIVILEGE message to NEXT;
        NEXT := 0;
      else
        HOLDING := true;

    procedure P2;  (* Handle REQUEST(X, Y) message *)
      if (LAST = 0) then
        if HOLDING then
          send PRIVILEGE message to Y;
          HOLDING := false;
        else
          NEXT := Y;
      else
        send REQUEST(I, Y) to LAST;
      LAST := X;

    Figure 2. Algorithm

In the algorithm, we assume that the REQUEST message is of the form REQUEST(X, Y), where X denotes the adjacent node from which the request came and Y denotes the node where the request originated. Each node executes procedures P1 and P2 in local mutual exclusion. The only exception is that a node does not have to execute in mutual exclusion while waiting for a PRIVILEGE message to arrive or while in the critical section.

The initialization procedure is shown in Figure 3.

    procedure INIT;  (* Initialize nodes *)
      if (holding the token) then
        HOLDING := true;
        LAST := 0;  (* Node I is the sink *)
        NEXT := 0;
        send INITIALIZE(I) message to all neighboring nodes;
      else
        wait for INITIALIZE(J) message to arrive from node J;
        HOLDING := false;
        LAST := J;
        NEXT := 0;
        send INITIALIZE(I) message to all neighboring nodes, except node J;
      end

    Figure 3. Initialization procedure

4 Properties of the Algorithm

In this section we sketch the proofs of correctness with respect to guaranteed mutual exclusion, deadlock freedom, and starvation freedom. Complete proofs may be found in a KSU technical report [10].

4.1 Mutual exclusion

Theorem 1: The algorithm guarantees mutual exclusion.
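As a sanity check, the two procedures can be transcribed into an executable sketch. The Python below is our own simulation, not part of the paper: channels are modeled as FIFO queues, `enter_cs`/`release_cs` split P1 around the critical section, and delivery is deterministic rather than concurrent. Running the Figure 1 scenario reproduces the configuration described in the text.

```python
from collections import deque

class Node:
    """State of one node: the three variables of the algorithm."""
    def __init__(self, ident, last, holding):
        self.id, self.LAST, self.NEXT, self.HOLDING = ident, last, 0, holding

def send(net, dest, msg):
    net[dest].append(msg)

def request_cs(net, nodes, i):
    """Head of P1: issue a REQUEST and become a sink (then wait)."""
    n = nodes[i]
    if not n.HOLDING:
        send(net, n.LAST, ('REQUEST', i, i))
        n.LAST = 0

def enter_cs(nodes, i):
    """P1 for a node already holding an idle token: enter at once."""
    nodes[i].HOLDING = False

def release_cs(net, nodes, i):
    """Tail of P1: leave the CS; pass the token if someone waits."""
    n = nodes[i]
    if n.NEXT != 0:
        send(net, n.NEXT, ('PRIVILEGE',))
        n.NEXT = 0
    else:
        n.HOLDING = True

def handle(net, nodes, i, msg):
    """P2: node i processes REQUEST(X, Y)."""
    n = nodes[i]
    _, X, Y = msg
    if n.LAST == 0:                    # i is currently a sink
        if n.HOLDING:                  # idle token holder: grant directly
            send(net, Y, ('PRIVILEGE',))
            n.HOLDING = False
        else:
            n.NEXT = Y                 # enqueue Y in the implicit queue
    else:
        send(net, n.LAST, ('REQUEST', i, Y))  # forward toward the sink
    n.LAST = X                         # edge reversal toward the new sink

def pump(net, nodes):
    """Deliver pending messages until quiescent; report CS grants."""
    grants, progress = [], True
    while progress:
        progress = False
        for i, q in net.items():
            while q:
                msg = q.popleft()
                progress = True
                if msg[0] == 'PRIVILEGE':
                    grants.append(i)
                else:
                    handle(net, nodes, i, msg)
    return grants

# Figure 1 scenario: straight line 1-2-3-4-5, node 5 holds the token.
nodes = {i: Node(i, last, holding=(i == 5))
         for i, last in [(1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]}
net = {i: deque() for i in nodes}
enter_cs(nodes, 5)                 # node 5 enters its critical section
request_cs(net, nodes, 3)          # node 3 requests while 5 is inside
pump(net, nodes)                   # request reaches node 5; NEXT_5 = 3
release_cs(net, nodes, 5)          # node 5 leaves: PRIVILEGE to node 3
grants = pump(net, nodes)
assert grants == [3] and nodes[4].LAST == 3 and nodes[5].LAST == 4
```

The final assertions match Figure 1d: node 3 holds the token and is the sink, while the edges from nodes 4 and 5 now point toward it.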
Proof: In any token based mutual exclusion algorithm, possession of the token gives a node the exclusive privilege to enter the critical section. Initially, there is exactly one node holding the token. A node holding the

token can pass the token to another node by sending a PRIVILEGE message and setting HOLDING to false. Thus, there can be at most one node holding the token. Since possession of the token is necessary for a node to enter the critical section, mutual exclusion is guaranteed. □

4.2 Deadlock and starvation freedom

We first recall a few assumptions:

1. A node can have at most one outstanding request to enter the critical section at any given time. We do not allow multiple requests from a single node.
2. The initial logical structure is acyclic without considering the directions of the edges. Sending a PRIVILEGE message does not change the graph. Forwarding a REQUEST message simply results in edge reversal. Thus, the logical structure is always acyclic.
3. Initially, exactly one node possesses the token, and in all other nodes, LAST is initialized to point to the neighboring node which is on the path to the node holding the token.

We will give a sketch of the proof for deadlock and starvation freedom. Let LAST_x denote the value of LAST at node x and NEXT_x denote the value of NEXT at node x.

Lemma 1: If LAST_I = 0, for some node I, then either node I is holding the token and has not received a request from another node since receiving the token, or node I has requested the token on its own behalf and has not received a subsequent request for the token.

Lemma 2: At any point in time, every node I is on a path, of length less than N, to a node J such that LAST_J = 0. Furthermore, a request from node I will reach a sink node after at most D messages are forwarded on behalf of node I, where D is the diameter of the logical structure.

Theorem 2: The algorithm is deadlock and starvation free.

Proof: By Lemma 1, the only time a node J sets LAST_J = 0 is when it is initially holding the token or has requested the token, but has not received a subsequent request for the token.
In either case, a node will save at most one subsequent request for the token by setting its NEXT variable to point to the node originating the request. If more requests are received, they will simply be forwarded. By Lemma 2, every request from node I will reach a sink node (say node J) after at most D messages are forwarded on behalf of node I. Once the request from node I has been enqueued in the implicit waiting queue (node J sets NEXT_J = I), node I will only have to wait until every other node ahead of node I in the implicit waiting queue enters and leaves the critical section. Since each node can have at most one outstanding request, this means that node I will have to wait on at most N − 1 critical sections after it has been enqueued. Therefore, deadlock and starvation freedom are guaranteed. □

5 Performance Analysis

As in Raymond's algorithm, the performance of the algorithm depends on the topology of the logical structure. The worst topology, in terms of the number of messages required per critical section entry, is a straight line, as shown in Figure 1. Raymond claimed that the best topology is the radiating star topology [12]. However, the best topology is the star topology, with one node in the center and all other nodes as leaf nodes. In the following discussion, we define the diameter D of the topology to be the length of the longest path.

5.1 Upper bound

The upper bound is equal to (D + 1) messages per critical section entry. This occurs when a requesting node and a sink node are at opposite ends of the longest path: D messages for the request to travel to the sink node and one message for the token to be sent back to the requesting node. Thus, in the straight line topology, the upper bound is N, where N is the number of nodes in the system. In the best topology, the upper bound is 3, since the diameter of the star topology is 2. Note that this is the same as the performance of a centralized mutual exclusion algorithm.
Other algorithms have the following upper bounds:

- Suzuki and Kasami's algorithm: N
- Trehel and Naimi's algorithm: N (O(log(N)) on the average)
- Raymond's algorithm: 2D
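The (D + 1) bound for the two extreme topologies can be checked mechanically. The sketch below is our own code (not from the paper); `diameter` computes the longest shortest path of the undirected logical structure by breadth-first search from every node:

```python
def diameter(edges, nodes):
    """Longest shortest path in the undirected logical structure."""
    adj = {v: set() for v in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    def ecc(s):                      # eccentricity of s via BFS
        dist, frontier, seen = 0, {s}, {s}
        while True:
            nxt = {w for v in frontier for w in adj[v]} - seen
            if not nxt:
                return dist
            seen |= nxt
            frontier, dist = nxt, dist + 1
    return max(ecc(v) for v in nodes)

N = 10
line = [(i, i + 1) for i in range(1, N)]           # worst topology
star = [(1, i) for i in range(2, N + 1)]           # best topology
assert diameter(line, range(1, N + 1)) + 1 == N    # upper bound N
assert diameter(star, range(1, N + 1)) + 1 == 3    # upper bound 3
```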

5.2 Average bound

We analyze the average performance for the best topology. If the requesting node holds the token, no messages are required. If the token is being held by a leaf node, then on the average 3 − 4/N messages per critical section entry are required. This is calculated as follows: the other (N − 2) leaf nodes require 3 messages each, the center node requires 2 messages (one REQUEST message and one PRIVILEGE message), and the token-holding leaf itself requires none. Thus, the average is ((N − 2) · 3 + 1 · 2)/N = 3 − 4/N. If the token is being held by the center node, then only 2 − 2/N messages are required: ((N − 1) · 2 + 1 · 0)/N = 2 − 2/N. We assume that at any given time each node has an equal likelihood of holding the token. There are (N − 1) leaf nodes and one center node; therefore, on the average, ((N − 1) · (3 − 4/N) + 1 · (2 − 2/N))/N = 3 − 5/N + 2/N² messages are required per critical section entry.

In the centralized scheme, on the average, (3 − 3/N) messages per critical section entry are required. We assume that the control node may itself request to enter the critical section, in which case it requires no messages. Thus, ((N − 1) · 3 + 1 · 0)/N = 3 − 3/N messages are required. Both methods approach 3 messages per critical section entry as N approaches infinity. Under heavy demand, the performance is about the same, i.e., at most three messages per critical section entry.

5.3 Storage overhead

Each node only needs to maintain three simple variables. A REQUEST message carries two integer variables, and a PRIVILEGE message needs no data structure. This is the same as Trehel and Naimi's algorithm and significantly less overhead compared with other distributed mutual exclusion algorithms, which maintain an array structure or a waiting queue of requesting nodes, either in every node or in the token.

5.4 Synchronization delay

Synchronization delay is the maximum number of sequential messages required after a node, say node I, leaves the critical section and before another node, say node J, can enter the critical section.
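The averages in Section 5.2 can be verified with exact rational arithmetic. A quick check (our own code, using the same equal-likelihood assumption as the analysis):

```python
from fractions import Fraction as F

def star_average(N):
    """Average messages per CS entry for the star topology, with each
    node equally likely to hold the token."""
    by_leaf   = F((N - 2) * 3 + 2, N)   # token at a leaf:  3 - 4/N
    by_center = F((N - 1) * 2, N)       # token at center:  2 - 2/N
    return ((N - 1) * by_leaf + by_center) / N

def centralized_average(N):
    """Centralized scheme: (N - 1) nodes need 3 messages, control node 0."""
    return F((N - 1) * 3, N)            # 3 - 3/N

# Both closed forms from the paper hold exactly for every N checked.
for N in (4, 10, 100):
    assert star_average(N) == 3 - F(5, N) + F(2, N * N)
    assert centralized_average(N) == 3 - F(3, N)
```

Both averages indeed converge to 3 as N grows, matching the claim that the scheme and a centralized algorithm perform about the same under heavy demand.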
5.4.1 Heavy load

Under heavy load, we assume that the request from node J is to be processed next and that node J is blocked waiting for node I to leave the critical section. In this case, NEXT_I = J and node J will be passed the token immediately after node I leaves the critical section. Since only one PRIVILEGE message needs to be passed, the heavy-load synchronization delay is one message. This is even better than a centralized scheme, in which the heavy-load synchronization delay is two messages: one RELEASE and one GRANT message. Other token based algorithms have the following heavy-load synchronization delays:

- Suzuki and Kasami's algorithm: 1
- Trehel and Naimi's algorithm: 1
- Raymond's algorithm: D

5.4.2 Light load

Under light load, we assume that the request from node J is to be processed next and that node J has not yet requested when node I leaves the critical section. In this case, NEXT_I = 0 and node J will have to initiate a REQUEST message to obtain the token. Since at most D messages are required to forward the REQUEST and only one PRIVILEGE message is required to forward the token, the light-load synchronization delay is D + 1 messages. This is not as good as a centralized scheme, in which the light-load synchronization delay is two messages: one REQUEST and one GRANT message. Other token based algorithms have the following light-load synchronization delays:

- Suzuki and Kasami's algorithm: 2
- Trehel and Naimi's algorithm: O(log(N)) (on the average)
- Raymond's algorithm: 2D

6 Conclusion

This paper presented a token based algorithm for distributed mutual exclusion which assumes a fully connected physical network and a dag structured logical network. The algorithm imposes very little storage overhead on each node and in each message. If the logical structure used is the star topology, the algorithm attains comparable performance to centralized schemes. On the average, about 3 messages are required per critical section entry in both schemes.
However, our scheme reduces the heavy-load synchronization delay to one message compared with two messages in centralized schemes.

References

[1] D. Agrawal and A. El Abbadi. An efficient solution to the distributed mutual exclusion problem. In Proc. 8th ACM Symposium on Principles of Distributed Computing, pp. 193–200, 1989.
[2] J.M. Bernabeu-Auban and M. Ahamad. Applying a path-compression technique to obtain an efficient distributed mutual exclusion algorithm. Lecture Notes in Computer Science, 392:33–44, 1989.
[3] O.S.F. Carvalho and G. Roucairol. On mutual exclusion in computer networks. Communications of the ACM, 26(2):146–147, 1983.
[4] D. Ginat, D.D. Sleator, and R.E. Tarjan. A tight amortized bound for path reversal. Information Processing Letters, 31:3–5, 1989.
[5] J.M. Helary, N. Plouzeau, and M. Raynal. A distributed algorithm for mutual exclusion in an arbitrary network. The Computer Journal, 31(4):289–295, 1988.
[6] L. Lamport. Time, clocks, and the ordering of events in a distributed system. Communications of the ACM, 21(7):558–565, 1978.
[7] M. Maekawa. A √N algorithm for mutual exclusion in decentralized systems. ACM Transactions on Computer Systems, 3(2):145–159, 1985.
[8] A.J. Martin. Distributed mutual exclusion on a ring of processes. Science of Computer Programming, 5:265–276, 1985.
[9] M. Naimi and M. Trehel. How to detect a failure and regenerate the token in the log(n) distributed algorithm for mutual exclusion. Lecture Notes in Computer Science, 312:155–166, 1987.
[10] M.L. Neilsen and M. Mizuno. A dag-based algorithm for distributed mutual exclusion. Technical Report TR-CS-89-12, Kansas State University, Manhattan, Kansas, 1989.
[11] S. Nishio, K.F. Li, and E.G. Manning. A time-out based resilient token transfer algorithm for mutual exclusion in computer networks. In IEEE 9th International Conference on Distributed Computing Systems, pp. 386–393, 1989.
[12] K. Raymond. A tree-based algorithm for distributed mutual exclusion. ACM Transactions on Computer Systems, 7(1):61–77, 1989.
[13] M. Raynal. Algorithms for Mutual Exclusion. MIT Press, 1986.
[14] M. Raynal. Prime numbers as a tool to design distributed algorithms. Information Processing Letters, 33:53–58, 1989.
[15] M. Raynal. A simple taxonomy for distributed mutual exclusion algorithms. Technical Report 560, INRIA, 78153 Le Chesnay Cedex, France, 1990.
[16] G. Ricart and A.K. Agrawala. An optimal algorithm for mutual exclusion in computer networks. Communications of the ACM, 24(1):9–17, 1981.
[17] G. Ricart and A.K. Agrawala. Authors' response to 'On mutual exclusion in computer networks' by Carvalho and Roucairol. Communications of the ACM, 26(2):147–148, 1983.
[18] B.A. Sanders. The information structure of distributed mutual exclusion algorithms. ACM Transactions on Computer Systems, 5(3):284–299, 1987.
[19] M. Singhal. A dynamic information-structure mutual exclusion algorithm for distributed systems. In IEEE 9th International Conference on Distributed Computing Systems, pp. 70–78, 1989.
[20] M. Singhal. A heuristically-aided algorithm for mutual exclusion in distributed systems. IEEE Transactions on Computers, 38(5):651–662, 1989.
[21] I. Suzuki and T. Kasami. A distributed mutual exclusion algorithm. ACM Transactions on Computer Systems, 3(4):344–349, 1985.
[22] M. Trehel and M. Naimi. A distributed algorithm for mutual exclusion based on data structures and fault tolerance. In Proc. IEEE 6th International Conference on Computers and Communications, pp. 35–39, 1987.
[23] J.L.A. van de Snepscheut. Fair mutual exclusion on a graph of processes. Distributed Computing, 2:113–115, 1987.