Lock Bypassing: An Efficient Algorithm For Concurrently Accessing Priority Heaps

Yong Yan, HAL Computer Systems, Inc., Campbell, California
and Xiaodong Zhang, Department of Computer Science, P. O. Box 8795, College of William & Mary, Williamsburg, Virginia

The heap representation of priority queues is one of the most widely used data structures in the design of parallel algorithms. Efficiently exploiting the parallelism of a priority heap has a significant influence on the efficiency of a wide range of applications and parallel algorithms. In this paper, we propose an aggressive priority heap operating algorithm, called the lock bypassing (LB) algorithm, for shared-memory systems. This algorithm minimizes interference among concurrent enqueue and dequeue operations on priority heaps while keeping the strict priority property: a dequeue always returns the minimum of a heap. The unique idea that distinguishes the LB algorithm from previous concurrent algorithms on priority heaps is the use of locking-on-demand and lock-bypassing techniques to minimize locking granularity and to maximize parallelism. The LB algorithm allows an enqueue operation to bypass the locks along its insertion path until it reaches a possible place where it can perform the insertion. Meanwhile, a dequeue operation also makes its locking range and locking period as small as possible by carefully tuning its execution procedure. The LB algorithm is shown to be correct in terms of deadlock freedom and heap consistency. The performance of the LB algorithm was evaluated analytically and experimentally in comparison with previous algorithms. Analytical results show that the LB algorithm reduces by half the number of locks waited for in the worst case by previous algorithms. The experimental results show that the LB algorithm outperforms previously designed algorithms by up to a factor of 2 in hold time.
General Terms: Aggressive Locking, Parallel Algorithm, Performance Evaluation, Priority Heap, Shared-memory System

This work is supported in part by the National Science Foundation under grants CCR and CCR, by the Air Force Office of Scientific Research under grant AFOSR, and by the Office of Naval Research under grant ONR. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or direct commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works, requires prior specific permission and/or a fee. Permissions may be requested from Publications Dept., ACM Inc., 1515 Broadway, New York, NY, USA, fax +1 (212), or permissions@acm.org.

1. INTRODUCTION

The heap is an important data structure used as a priority queue in many application areas, such as operating system scheduling, discrete event simulation, graph search, and the branch-and-bound method on shared-memory multiprocessors [Biswas and Browne 1993; Cormen et al. 1990; Jones 1986; Olariu and Wen 1991; Ranade et al. 1994]. In these applications, each processor repeatedly performs an "access-think" cycle: every processor executes its current event or subproblem, then accesses the shared priority heap to insert new events or subproblems that it has generated. The shared heap is used to provide the most important events or subproblems for a set of processors to solve in parallel. Consistency and efficiency are two important characteristics for assessing a concurrent operation on priority heaps. Consistency requires that operations maintain the priority property of a heap, which is usually implemented with the help of some locking technique. Efficiency requires that a priority queue be operated on in parallel by as many processors as possible, and is mainly affected by the locking technique employed. Hence, the challenging issue is how to maximize the parallelism of accessing a priority heap while maintaining the consistency of the heap. For a concurrent priority heap, enqueue (or insertion) and dequeue (or deletion) are the two critical operations that need to be parallelized. An enqueue operation is responsible for putting a new data item at an appropriate place in the priority heap. A dequeue operation always removes the minimum of a minimum heap or the maximum of a maximum heap, and reconstructs the heap. To allow more processors to operate on a heap, it is necessary to reduce the interference among operations as much as possible.
In the ordinary sequential heap algorithm, dequeue operations manipulate the heap from top to bottom, while enqueue operations manipulate it from bottom to top. This reversal of manipulation directions between enqueues and dequeues prevents both from being parallelized. To solve this problem, Biswas and Browne present a scheme in [Biswas and Browne 1987]. Their scheme decomposes an enqueue or a dequeue into a sequence of update steps at different levels of a heap, then schedules these steps to execute in parallel. However, this scheme incurs substantial overhead, and even performs worse than the sequential-access heap unless the heap size is very large. In [Rao and Kumar 1988], Rao and Kumar propose a top-to-bottom sequential insertion algorithm, and then a concurrent algorithm for manipulating the priority heap. In their algorithm, both enqueues and dequeues start at the root of a heap (a heap is equivalent to a balanced binary tree), lock the current node, operate on the current node if necessary, and then select the next node to trace down the heap tree until the operation finishes at a place that keeps the heap consistent. Although Rao and Kumar's algorithm (termed the RK algorithm hereafter) only needs to lock one node for an enqueue operation and three nodes for a dequeue operation, the lock granularity is actually the whole subtree rooted at the topmost node locked by an operation. This lock granularity may be too large to obtain good performance in parallel execution: only those operations on different branches of the heap tree can proceed in parallel. In order to keep enqueues operating on

different subtrees of a heap tree, Ayani [Ayani 1991] proposes an algorithm based on a left-right directing rule. The proposed algorithm is called the LR-algorithm. But the LR-algorithm may not improve much on the RK algorithm in practice, because dequeue operations still proceed randomly, depending on the current heap. The random order of the dequeue operations nullifies the benefit of separating parallel enqueue operations. In this paper, rather than separating parallel operations on a heap, we propose more aggressive techniques to reduce locking granularity, and hence the interference among heap operations. Instead of blindly locking the current node, an enqueue operation examines the current node first, and locks it only when necessary. In this way, enqueue operations can bypass other locked nodes and go directly to the possible place where new data can be inserted. A dequeue operation does not put the key taken from a bottom node into the heap until it visits the last node on its heap tree walking path; it uses duplicated keys to eliminate unnecessary blocking of enqueue operations. The correctness of our LB algorithm is proved in terms of heap consistency and deadlock freedom. The proposed new algorithm, termed the LB (lock bypassing) algorithm, was evaluated analytically and experimentally in comparison with the LR algorithm and the RK algorithm. Analytical results show that the LB algorithm can reduce by half the number of locks waited for, in the worst case, by a hold operation. A hold operation consists of a pair of dequeue and enqueue operations. Experiments evaluated the three algorithms with respect to different think delays and heap data with different distributions.
The experimental results showed that the average hold operation time of the LB algorithm increases significantly more slowly than that of the RK algorithm and that of the LR algorithm as the number of processors increases. When the number of processors increases to 16, the LB algorithm outperforms the RK algorithm by 50% and the LR algorithm by 40% in hold operation time. Section 2 reviews the basic characteristics of a priority heap and its sequential dequeue and enqueue operations; the RK algorithm and the LR-algorithm are also reviewed there. We propose a more aggressive locking-based algorithm in Section 3, where its correctness is also proven. In Section 4, the performance of the LB algorithm is studied analytically. Section 5 experimentally evaluates the LB algorithm, the LR algorithm, and the RK algorithm on a cache-coherent shared-memory system. Conclusions are drawn in Section 6.

2. PRELIMINARIES

2.1 Priority Heap

A priority heap [Cormen et al. 1990] is a complete binary tree with the priority property that the key value (hereafter called the key or node key) at any node is not larger than (for a minimum heap) or not smaller than (for a maximum heap) the keys of its child nodes (if they exist). This is defined as the consistency property of a heap. In this paper we focus on the minimum heap (the ideas can also be used for other types of heaps). We maintain a strict priority property, which requires that a dequeue operation always return the current minimal data item in the heap. An array A[0 : n] is used to implement a heap. For a heap with n elements

as shown in Figure 1, an array of size n is used to hold the heap, where node i occupies location i. The left child of node i is found at location 2i, and the right child at location 2i + 1. The parent of node i is node ⌊i/2⌋. A[0] is initialized to MAXINT, the maximal integer value on a given machine, which will be used to simplify our LB algorithm. In order to focus on evaluating concurrent dequeue and enqueue operations, we initially allocate a sufficiently large array to hold our heap and initialize empty locations with MAXINT.

Fig. 1. A minimum heap with 12 keys. The upper half of each circle represents the node number, and the lower half represents the key value. The variable Fulllevel records the index of the leftmost key at the bottom level. The variable Last records the index of the last key in the heap.

2.2 Sequential Enqueue and Dequeue Operations

Enqueue and dequeue are the two important operations supported for a heap. The enqueue operation inserts a new key into the heap, and the dequeue operation removes the smallest key in the heap and returns it. The heap should maintain the priority property after an enqueue or a dequeue operation. A dequeue operation first removes and returns the smallest key, at the root node of the heap. It moves the last key, pointed to by Last, onto the root and compares it with the smallest child of the deleted root node. If the smallest child holds a key smaller than the current root, the new key of the root is exchanged with that smallest key. This procedure repeatedly goes down the heap tree until reaching a descendant whose key is smaller than the keys of its children. In contrast, a traditional enqueue operation works from the bottom to the top of a heap. To insert a new key into the heap, an enqueue operation initially adds the new key at the first empty node, i.e., node Last + 1.
It works up the tree, exchanging with the parent node if the parent has a larger key, until the new key is placed at a node whose parent has a smaller key. Because a traditional sequential enqueue operation works in the opposite direction from a dequeue operation, the two operations tend to deadlock in parallel execution. An alternative enqueue that can work from

top to bottom has been proposed by Rao and Kumar [Rao and Kumar 1988]. This algorithm adds a new key at the first empty node, termed the target node, then compares downwards with each ancestor node on the insertion path from the root to the target, exchanging the key carried down with the key at an ancestor node whenever the carried key is smaller. In Figure 1, the variable Fulllevel is used to determine the insertion path. This enqueue operation is helpful for implementing concurrent heap accesses. Hereafter, the enqueue operation refers to the top-to-bottom enqueue operation. The algorithm for the top-to-bottom enqueue operation is listed in Figure 2.

    PUBLIC struct node { int key; } Heap[LEN];  /* LEN is large enough to hold a heap. */
    int Fulllevel;   /* index of the leftmost node on the bottom level. */
    int Last;        /* the index of the last node. */

    Enqueue(nkey)
    {
        int i, j, k;
        Last = Last + 1;
        if (Last >= Fulllevel * 2) Fulllevel = Last;
        j = Fulllevel / 2;     /* used to determine the insertion path */
        i = Last - Fulllevel;  /* the displacement of the target */
        k = 1;                 /* current position on the insertion path */
        while (j != 0) {
            if (Heap[k].key > nkey) exchange(nkey, Heap[k].key);
            if (i >= j) { k = rson(k); i = i - j; }
            else k = lson(k);
            j = j / 2;
        }
        Heap[k].key = nkey;
    }

Fig. 2. Sequential top-to-bottom enqueue algorithm: rson(k) and lson(k) represent the right child and left child of node k; exchange(key1, key2) exchanges key1 with key2.

2.3 The RK Algorithm and the LR-algorithm

When processors access a shared heap in parallel, locks should be used to maintain the consistency of the heap. The most conservative locking strategy is to lock the whole heap as a single data structure. This strategy simply implements a serial-access scheme, resulting in the worst possible performance in parallel execution.
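To complement the listing in Figure 2, here is a self-contained, runnable rendering of the same top-to-bottom enqueue. The array size, the init_heap helper, and the use of MAXINT as the empty-slot sentinel are assumptions consistent with Section 2.1, not part of the paper's figure:

```c
#include <assert.h>

#define LEN 64
#define MAXINT 0x7fffffff   /* sentinel for empty slots (Section 2.1) */

/* 1-based array heap, as in Figure 1. */
static struct node { int key; } Heap[LEN];
static int Fulllevel = 0, Last = 0;

static int lson(int k) { return 2 * k; }      /* left child of node k */
static int rson(int k) { return 2 * k + 1; }  /* right child of node k */

static void exchange(int *a, int *b) { int t = *a; *a = *b; *b = t; }

static void init_heap(void) {
    for (int i = 0; i < LEN; i++) Heap[i].key = MAXINT;
    Fulllevel = Last = 0;
}

/* Sequential top-to-bottom enqueue of Figure 2. */
static void enqueue(int nkey) {
    int i, j, k;
    Last = Last + 1;
    if (Last >= Fulllevel * 2) Fulllevel = Last;
    j = Fulllevel / 2;      /* step size along the insertion path */
    i = Last - Fulllevel;   /* displacement of the target node */
    k = 1;                  /* current node on the insertion path */
    while (j != 0) {
        if (Heap[k].key > nkey) exchange(&nkey, &Heap[k].key);
        if (i >= j) { k = rson(k); i = i - j; }
        else k = lson(k);
        j = j / 2;
    }
    Heap[k].key = nkey;
}

/* Build a small heap and return the root, which must be the minimum. */
static int enqueue_demo(void) {
    init_heap();
    enqueue(10); enqueue(40); enqueue(20); enqueue(5);
    return Heap[1].key;
}
```

Each insertion touches only the nodes on the root-to-target path, which is what makes the top-down direction compatible with top-down dequeues.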
Hence, reducing the locking granularity is the central task in parallelizing enqueue and dequeue operations. The RK algorithm employs a window locking strategy, where a window is a portion of the heap. An enqueue operation first locks the root of the heap and checks whether an exchange should be done between the new key and

the locked node. Then it locks the next node along the insertion path from the root to the target, releases the currently locked node, and makes a comparison between the two keys. This procedure is repeated for each node on the insertion path. A dequeue operation locks the root of the heap, removes the key at the root as the return key, moves the last node's key onto the root, and locks the two children of the root. If the root has a larger key than the minimum child (the child with the smallest key), the key at the root is exchanged with the key at the minimum child, and the locks on the root and the maximum child are released. The procedure is repeated from the minimum child until reaching a descendant whose key is not larger than the keys of its children. The window size is one node for an enqueue operation and three nodes for a dequeue operation. Because no operation can bypass a locked node, the locking granularity of a lock is actually the whole subtree rooted at the locked node. For example, when an operation has locked the root of the heap, all other incoming operations must wait at the root. When multiple enqueues operate on different parts of the heap, they must traverse the overlapping part of their insertion paths in the order of their visits to the root. Hence, two operations can operate on the heap with maximal parallelism only if the common part of their operating paths is small. In addition, the RK algorithm uses a state variable to resolve the interaction between an enqueue and a dequeue. For example, if the most recent enqueue operation is still in progress, a dequeue operation must wait for the last leaf node to have a non-transient key, because the key finally inserted into a bottom node by an enqueue can only be determined when the enqueue operation finishes its insertion procedure.
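The sliding three-node window can be modeled abstractly. In the sketch below, locks are illustrative counters rather than the RK code, which lets us check that a dequeue walking down the tree never holds more than its window of three locks at once:

```c
#include <assert.h>

/* Lock/unlock are modeled as counters so the RK-style sliding window can
   be exercised without threads.  A dequeue locks a parent and its two
   children, then releases the parent and the non-chosen child before
   locking the next pair one level down. */
#define NODES 32
static int locked[NODES];
static int held, max_held;

static void lk(int i)  { locked[i] = 1; if (++held > max_held) max_held = held; }
static void ulk(int i) { locked[i] = 0; --held; }

/* Slide the window `depth` levels down from the root, always descending
   to the left child; returns the largest number of locks held at once. */
static int window_walk(int depth) {
    held = max_held = 0;
    int i = 1;
    lk(i); lk(2 * i); lk(2 * i + 1);
    for (int d = 1; d < depth; d++) {
        int next = 2 * i;        /* the chosen (minimum) child */
        ulk(2 * i + 1); ulk(i);  /* release parent and the other child */
        i = next;
        lk(2 * i); lk(2 * i + 1);
    }
    ulk(i); ulk(2 * i); ulk(2 * i + 1);
    return max_held;
}
```

The constant window size is the point: the cost of a dequeue in locks is bounded, but the blocked region is still the whole subtree below the window.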
The LR-algorithm [Ayani 1991] is motivated by the possibility of directing enqueue operations to operate along different insertion paths as much as possible. When an enqueue operation is working on the left subtree of a heap, the next enqueue operation is directed to work on the right subtree. The LR-algorithm thus lets multiple enqueue operations work on different parts of the heap as early as possible. For an enqueue operation, if the bottom level has q nodes, the new key should be added at the empty node Fulllevel + b1 b2 ... bm, where bm bm-1 ... b1 is the binary representation of q - 1, m = log(Fulllevel) is the depth of the bottom level, and Fulllevel is the index of the leftmost node at the bottom level. The LR-algorithm is thus a more effective approach to exploiting parallelism in insertion operations. Because the working path of a dequeue operation depends on the current heap and cannot be predetermined, the LR-algorithm uses the same dequeue algorithm as the RK algorithm. However, in practice, enqueue operations interleave with dequeue operations in a heap, and for the same reason, ordering only the enqueue operations cannot guarantee a significant performance improvement over the RK algorithm. Our experimental results, reported in a later section, show that the LR-algorithm and the RK algorithm have no major performance difference.

3. LOCK BYPASSING ALGORITHM (LB ALGORITHM)

In this section, the design motivation and idea of our lock bypassing algorithm are described. Then the actual algorithm is presented.

[Figure 3 shows a three-level tree fragment: node a at level L_i, its children b and c at level L_(i+1), and b's children d and e at level L_(i+2).]

Fig. 3. (a) A dequeue follows another dequeue. (b) A dequeue follows an enqueue.

3.1 Motivation and algorithmic idea

In the design of a concurrent algorithm for accessing a priority heap, each processor actually executes the traditional sequential enqueue and dequeue algorithms. The major work in designing a concurrent algorithm is to use lock primitives to coordinate the execution of multiple processors so that heap consistency is guaranteed while high performance is pursued. We let each processor execute the top-to-bottom enqueue and dequeue algorithms described in the previous section. Here, we propose an aggressive way to use locks so that significantly more parallelism can be exploited while consistency is maintained. The priority heap has a balanced tree structure. Its strict priority property requires that a dequeue operation always visit the root of a heap first to grab the minimal data item in the heap. Although enqueue operations can be inverted to walk down a heap tree as in the RK algorithm, the interference among parallel enqueues and dequeues is still serious. Locks are the necessary primitives used to maintain the consistency of a heap. Intuitively, when a dequeue or enqueue operation operates on a node, it is necessary for the operation to lock related nodes to guarantee heap consistency. However, it is unnecessary for a locked node to block incoming heap operations: if two operations do not need to operate on the same part of the heap, they should not block each other. For example, in Figure 1, assume that some operation has locked the root when an enqueue operation with a new key of 40 arrives. This new key is added as the right child of node 6 (as node 13), and will not be exchanged with any node along its insertion path from the root to the target.
So the enqueue operation should bypass the lock at the root. An enqueue operation stops at a node if and only if it finds that the new key is smaller than the key of the node. Our LB algorithm is mainly motivated by this lock bypassing idea. Bypassing locks in a heap provides a good opportunity to exploit parallelism. Meanwhile, one must be careful with lock bypassing in order to maintain the

consistency of a heap and to avoid deadlock. For the two heap operations, dequeue and enqueue, there are four possibilities for lock bypassing: an enqueue bypasses an enqueue, an enqueue bypasses a dequeue, a dequeue bypasses an enqueue, and a dequeue bypasses a dequeue. Here we further analyze when lock bypassing can be safely used. First, because the strict priority property requires that a dequeue always return the root of a heap, a dequeue operation cannot bypass any lock on the root. In addition, a top-to-bottom dequeue operation always first removes the root key as the return key, then moves the key of the last node into the root node and sinks that key down the heap tree to a node whose two children hold larger keys. So, when two dequeue operations have the same sinking path, as in Figure 3(a) where dequeue1 wants to sink key a down and dequeue2 is sinking key b down along its right child, the later-arriving dequeue operation, dequeue1, cannot decide which child's key should be chosen as the new key of node a before dequeue2 has decided the new key of node b. Preserving the incoming order of dequeue operations on common paths is a necessary condition for consistency. So, a dequeue operation cannot bypass another dequeue operation. Second, if a dequeue operation meets an enqueue operation along its sinking path, as illustrated in Figure 3(b), the dequeue cannot decide which way to go before the enqueue finishes its comparison between the keys of node b and node e. So the dequeue cannot bypass the enqueue. The reason a dequeue operation is unable to bypass other operations is that its sinking path is decided dynamically and is affected by the results of other concurrent operations on that path. Bypassing can, however, be achieved by enqueue operations.
The walking path of an enqueue operation along a heap tree can be predetermined: it runs from the root to the next available empty node on the lowest level. An enqueue operation first inserts its new key into the empty node, then exchanges keys downwards along its walking path whenever the key being carried down is smaller than that of the compared node. The necessary and sufficient condition for an enqueue operation to stop and do an exchange at a node is that the node holds a key larger than the new key to be inserted. In the third case, where an enqueue operation follows another enqueue operation, as described in Figure 4(a) where node b is assumed to be locked by enqueue2 and enqueue1 has walking path a → b → e, enqueue1 can bypass node b if it finds that the new key to be inserted is larger than the key in node b. This bypass is safe even if enqueue2 will exchange its new key with node b. The comparison between enqueue2's new key and the key in node b has only two possible outcomes:

(1) enqueue2's new key is smaller than node b's key. Because enqueue1's key is larger than node b's key, it is also larger than enqueue2's new key. So node b is certainly not the place for enqueue1's key to be inserted.

(2) enqueue2's new key is not smaller than node b's key. Node b's key will not be changed by enqueue2, and node b is still not the place for enqueue1's key to be inserted.

In addition, if enqueue1 finds it has a smaller key than node b, it must stop there to check whether enqueue2 will insert an even smaller key into node b. The last case we consider is an enqueue operation following a dequeue operation.

[Figure 4 shows a three-level tree fragment: node a at level L_i, its children b and c at level L_(i+1), and b's children d and e at level L_(i+2).]

Fig. 4. (a) An enqueue follows another enqueue. (b) An enqueue follows a dequeue.

This is illustrated in Figure 4(b), where the dequeue operation locks node b and the enqueue operation has walking path a → b → e. If the enqueue finds it has a larger key than node b, it can bypass b and go to node e. When the enqueue still has a larger key than node e, it can confirm that bypassing node b was safe, because the dequeue operation always moves the minimal key among its children (d and e) into node b, which is certainly smaller than the enqueue's key. However, when the enqueue finds that its key is smaller than node e's key, it stops at node e to do an exchange. After the exchange, the enqueue operation must make another comparison back with node b's key: when node b is the root, where it holds the minimum of the heap, the dequeue operation may finish moving the minimal key among its children into b while the enqueue goes on to compare with node e. If, in this case, the enqueue holds a key smaller than all other existing keys except the one at the root, the enqueue should put its key into node b as the new root. Because we allow both the dequeue and the enqueue to proceed in parallel on the overlapping part of their walking paths, the enqueue will have put its key in node e, not in root b, so this backward check is necessary for consistency. The above analysis has determined when it is illegal to bypass a lock. In the design of our lock bypassing algorithm, the emphasis is on how to introduce locks to preserve the execution order of a dequeue operation relative to existing operations in a heap, and how to allow an enqueue operation to bypass the locks on its walking path. Exploiting as much parallelism as possible, while guaranteeing heap consistency, is the fundamental principle in our use of locks.
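The bypass rule of this subsection can be distilled into a single predicate (an illustrative restatement, not code from the paper): an enqueue walking its fixed root-to-target path may continue past a node, locked or not, exactly when its new key is not smaller than the key it reads there, since concurrent updates can only move that node's key further below the new key:

```c
#include <assert.h>

/* Illustrative restatement of the stopping rule: an enqueue stops at a
   node if and only if its new key is smaller than the node's key, so it
   may bypass the node whenever that condition is false. */
static int enqueue_may_bypass(int nkey, int node_key) {
    return !(nkey < node_key);
}
```

Note that bypassing with a larger key is unconditionally safe, while stopping with a smaller key may still require the backward check described above when a dequeue holds the node.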
3.2 Lock bypassing algorithm

Besides using the lock bypassing technique, our LB algorithm also incorporates some advantages of the previous work described in [Ayani 1991] and [Rao and Kumar 1988]. The enqueue algorithm of our LB algorithm is an LR top-to-bottom enqueue algorithm with the implementation of the lock bypassing technique. The dequeue algorithm of the LB algorithm follows the traditional top-to-bottom dequeue algorithm, with locks incorporated to guarantee the execution order of dequeues relative to other heap operations.

Fig. 5. Insertion order of the top-to-bottom enqueue algorithm (Figure (a)) and the LR top-to-bottom enqueue algorithm (Figure (b)). In Figures (a) and (b), each node is represented as a circle with two labels: the upper half is the node number (or node position), and the lower label represents the enqueue ordering number.

The difference between an LR top-to-bottom enqueue algorithm and the top-to-bottom enqueue algorithm shown in Figure 2 is illustrated in Figures 5(a) and (b). The insertion order in the top-to-bottom enqueue algorithm is the same as the node order; e.g., the next insertion node in a heap is the node indexed by Last + 1 (Last indexes the last node in the heap). However, the insertion order of the LR top-to-bottom enqueue algorithm differs from the node order, for the purpose of separating the insertion paths of two consecutive enqueue operations as early as possible. For example, in Figure 5(b), where the last enqueued data item is placed in node 4, the next enqueue operation should put its data item in node 6, not in node 5, so that the last two enqueue operations have only the root in common on their enqueue paths. The calculation of enqueue node positions is described in Section 2.3. In order to calculate the enqueue position, three global variables are introduced: Fulllevel, Last, and Level. Variable Last gives the total number of nodes in a heap. Level records the total number of levels of the heap, which equals ⌊log(Last)⌋. Variable Fulllevel indexes the leftmost node on the bottom level. These three global variables together control the heap. Because enqueue and dequeue operations initially need to update the heap based on these three variables, a lock, named Lastlock, is used to enforce mutually exclusive access to them.
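The interplay of the three globals and the LR placement rule can be sketched as follows. This is an illustration consistent with the text, not the paper's listing, and in the real algorithm the counter update runs under Lastlock:

```c
#include <assert.h>

/* Last counts nodes, Level is the heap depth, and Fulllevel indexes the
   leftmost bottom-level node.  The next enqueue target is Fulllevel plus
   the Level-bit reversal of (Last - Fulllevel): the LR placement rule
   that separates consecutive insertion paths as early as possible. */
static int Fulllevel = 0, Last = 0, Level = -1;

static int claim_target(void) {
    int offset, i = 0, k;
    if (++Last >= (Fulllevel << 1)) { Fulllevel = Last; Level++; }
    offset = Last - Fulllevel;
    for (k = 0; k < Level; k++) {   /* reverse Level bits of offset */
        i = (i << 1) | (offset & 1);
        offset >>= 1;
    }
    return Fulllevel + i;
}
```

With this rule the fifth key lands in node 6 rather than node 5, matching the Figure 5(b) example: after node 4, the next insertion path shares only the root with the previous one.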
In addition, a lock is introduced for each heap node so that the data exchanges made during the heap tree walks of enqueue and dequeue operations maintain the consistency of the heap. Two variables are associated with each heap node i: Heap[i].state and Heap[i].sstate, which are used to coordinate enqueue and dequeue operations. The state variable Heap[i].state takes one of the following values, representing the different states of the node key:

PRESENT: a key exists at the node;
PENDING: an enqueue operation, which will ultimately insert a key at the node, is in progress;
ABSENT: no key is present at the node.

A heap node is initially in the ABSENT state (an empty node). When an enqueue operation is trying to insert a key into an empty node, called the target node, it constantly compares its new key with those of the nodes along its heap walking path and does an exchange when necessary. Before the enqueue operation reaches the target node, the key of the target is not determined. To prevent a dequeue operation from trying to use the key of the target node during this time, the state variable of the target is set to PENDING, informing the dequeue that the key in the target is temporarily unavailable so that the dequeue can delay itself. When an enqueue inserts a stable new key into the target node, the state variable of the target is set to PRESENT so that a dequeue can safely use it. Although the state variable in a node can inform a dequeue when a key in the node is available, waiting for the state variable to change from PENDING to PRESENT can cause significant idle time for a dequeue operation. When a dequeue operation gets the minimal key from the root of a heap, it rearranges the heap by moving the key in the last node into the root and then sinking that key along the appropriate heap tree branch. Because a dequeue operation does not require a specific key to move into the root, it does not need to wait for the last node to have a stable key; in other words, it does not need to wait for the state of the last node to change from PENDING to PRESENT. Hence, the variable sstate in a node is used by a dequeue operation to inform an enqueue that it wants the key being inserted by the enqueue operation.
The sstate variable takes one of the following two values:

WANTED: a dequeue operation is waiting for the key;
FREE: no dequeue operation is waiting for the key.

When a dequeue operation is waiting for the key of a node that is going to be filled by an enqueue, it sets the sstate variable to WANTED. The enqueue operation always checks the sstate variable of its target. When it sees that the sstate variable of the target has the value WANTED, it stops its enqueue procedure and inserts its key into the target so that the dequeue operation can get the key as quickly as possible. The above data structure settings allow us to present our parallel enqueue and dequeue algorithms. The algorithms are given in Figure 6 and Figure 7, where rson(k) = 2k + 1 gives the right child of node k, lson(k) = 2k gives the left child of node k, and MAX(i) and MIN(i) calculate the indexes of the larger and smaller child of node i, respectively. mutex_lock and mutex_unlock are lock and unlock primitives. mutex_trylock is a nonblocking lock primitive (provided in most commercial thread libraries in slightly different formats) that tries to get the lock: if it fails, a non-zero error code is returned; otherwise, zero is returned for success. In the enqueue algorithm, an enqueue operation first competes for the lock Lastlock, trying to get the right to update the heap. When it gets the lock, it

Yong Yan and Xiaodong Zhang

PUBLIC struct heapnode {
    int key, state, sstate;
} Heap[LEN];                                  /* Define a heap */
int Fulllevel, Last, Level = -1;
mutex_t Mutex[MAX_HEAP_SIZE], Lastlock;

void Parallel_enqueue(int nkey)
{
    int target, j, k, offset, t, i = 0;
 1  mutex_lock(&Lastlock);
 2  if ((++Last) >= (Fulllevel << 1))
 3      { Fulllevel = Last; Level++; }
 4  j = Fulllevel / 2; offset = Last - Fulllevel;
 5  for (k = 0; k < Level; k++) {             /* compute position of last */
 6      i = (i << 1) | (offset & 0x0001);     /* element */
 7      offset = offset >> 1;
 8  }
 9  target = Fulllevel + i;
10  Heap[target].key = nkey;
11  Heap[target].state = PENDING; Heap[target].sstate = FREE;
12  mutex_unlock(&Lastlock);
13  k = 1;
14  while (j != 0) {
15      if (Heap[target].sstate == WANTED) break;
16      if (nkey < Heap[k].key) {
17          while ((t = mutex_trylock(Mutex + k)) != 0 && Heap[target].sstate != WANTED);
18          if (Heap[target].sstate == WANTED) {
19              if (t == 0) mutex_unlock(Mutex + k);
20              break;
21          }
22          if (nkey < Heap[k].key) {
23              if (nkey >= Heap[k/2].key || k == 1) {
24                  exchan(&nkey, &Heap[k].key);
25                  mutex_unlock(Mutex + k);
26              } else { mutex_unlock(Mutex + k);
27                  k /= 2; j *= 2;
28                  continue;
29              }
30          } else mutex_unlock(Mutex + k);
31      }
32      if (i >= j) {
33          k = rson(k); i -= j;
34      } else k = lson(k);
35      j /= 2;
36  }
37  Heap[target].key = nkey; Heap[target].state = PRESENT;
}

Fig. 6. The parallel enqueue algorithm.
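The position computation in statements 5-8 of Figure 6 encodes the walk from the root in the bits of i, which statements 32-35 then consume one bit per level. As a sanity check: for a 1-based array heap with lson(k) = 2k and rson(k) = 2k + 1, the turns to any node are simply the bits of the node's index below its leading 1, read from high to low. The stand-alone helper below is hypothetical (it is not part of Figure 6) and only illustrates this path encoding:

```c
#include <assert.h>

/* Decode the root-to-node path of a 1-based array heap: each bit of
   `target` below the leading 1, read from most to least significant,
   selects the left (0) or right (1) child. Hypothetical helper used
   only to illustrate the path encoding consumed in Figure 6. */
int steps_to(int target, int *turns, int max_turns) {
    int depth = 0;
    for (int t = target; t > 1; t >>= 1)
        depth++;                              /* depth of target below the root */
    for (int d = 0; d < depth && d < max_turns; d++)
        turns[d] = (target >> (depth - 1 - d)) & 1;   /* 0 = lson, 1 = rson */
    return depth;
}
```

For the heap of Figure 1 (Fulllevel = 8, Last = 12), node 12 = binary 1100 yields the turns right, left, left, i.e., the path node 1 -> node 3 -> node 6 -> node 12.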

PUBLIC struct heapnode {
    int key, state, sstate;
} Heap[LEN];                                  /* Define a heap */
int Fulllevel, Last, Level = -1;
mutex_t Mutex[MAX_HEAP_SIZE], Lastlock;

int Parallel_dequeue()
{
    int least, newkey;
    int i, j, k, m, ikk, offset = 0;
 1  mutex_lock(&Lastlock);
 2  if (Last == 0) {                          /* empty heap */
 3      mutex_unlock(&Lastlock); return(-1);
 4  }
 5  j = Last - Fulllevel;
 6  for (m = 0; m < Level; m++) {
 7      offset = (offset << 1) | (j & 0x0001);
 8      j = j >> 1;
 9  }
10  j = offset + Fulllevel;                   /* Find the last inserted node */
11  if (--Last < Fulllevel) {
12      Fulllevel = Fulllevel >> 1;
13      Level--;
14  }
15  Heap[j].sstate = WANTED;
16  while (j != 1 && Heap[j].state == PENDING);
17  least = Heap[1].key;
18  newkey = Heap[j].key;
19  Heap[j].sstate = FREE;
20  Heap[j].key = MAXINT; Heap[j].state = ABSENT;
21  if (j == 1) {
22      mutex_unlock(&Lastlock);
23      return(least);
24  }
25  mutex_unlock(&Lastlock);
26  i = 1;
27  mutex_lock(Mutex + i);
28  mutex_lock(Mutex + lson(i)); mutex_lock(Mutex + rson(i));
29  k = MIN(i); ikk = MAX(i);
30  while (newkey > Heap[k].key) {
31      exchan(&Heap[i].key, &Heap[k].key);
32      mutex_unlock(Mutex + ikk); mutex_unlock(Mutex + i);
33      i = k;
34      mutex_lock(Mutex + lson(i)); mutex_lock(Mutex + rson(i));
35      k = MIN(i); ikk = MAX(i);
36  }
37  exchan(&newkey, &Heap[i].key);
38  mutex_unlock(Mutex + i); mutex_unlock(Mutex + lson(i)); mutex_unlock(Mutex + rson(i));
39  return(least);
}

Fig. 7. The parallel dequeue algorithm.
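Figure 7 relies on the child-selection helpers MIN(i) and MAX(i) introduced above. One possible realization, consistent with lson(k) = 2k and rson(k) = 2k + 1, is sketched below; the flat key array Keys is a stand-in for Heap[..].key and is an assumption of this sketch, not the paper's code:

```c
#include <assert.h>

#define lson(k) (2 * (k))
#define rson(k) (2 * (k) + 1)

/* Stand-in key array; in Figure 7 these would be Heap[..].key. */
int Keys[16];

/* Index of the child holding the smaller key. */
int MIN(int i) { return Keys[lson(i)] <= Keys[rson(i)] ? lson(i) : rson(i); }

/* Index of the child holding the larger key. */
int MAX(int i) { return Keys[lson(i)] <= Keys[rson(i)] ? rson(i) : lson(i); }
```

With both child indexes in hand, the dequeue in Figure 7 can release the larger child's lock (ikk) as soon as it decides to descend toward the smaller one (k).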

calculates, in statements 2 to 9, the target node where the new key is to be inserted. It sets the state variable of the target node to PENDING to inform incoming dequeue operations, and sets the sstate variable of the target node to FREE. Then it releases the Lastlock lock. Starting from statement 14, instead of blindly locking each node step by step on its insertion path, the enqueue operation checks whether a dequeue operation is waiting for its key. If so, it branches to statement 37 to place its key into the target node directly and change the state variable of the target to PRESENT. If not, it compares its new key with the root. The procedure is repeated to bypass all locks along the insertion path from the root to the target node until the enqueue operation, at statement 16, meets a node, denoted by j, with a key larger than its new key. Then it first enters a spinning test loop at statement 17 to check whether it can hold the lock on node j, or whether a dequeue operation is waiting for its key. If it finds a waiting dequeue operation, it releases its lock if it has acquired it and branches to statement 37. Otherwise, it rechecks whether its new key is smaller than that of the locked node j at statement 22, because other enqueue operations holding larger keys may have locked node j first and changed its key during the period between the first check at statement 16 and the acquisition of the lock on node j in the spinning test loop. If not, it continues to walk down the heap along its insertion path. Otherwise, it checks the parent node before it can exchange its new key with node j, because its parent may hold a key temporarily placed by a dequeue operation that does not satisfy heap consistency. If the enqueue operation finds that the parent node holds a larger key than its new key, the enqueue operation releases the current node and goes back to the parent node to repeat its insertion procedure.
Otherwise, it exchanges its new key with the key of node j, then releases the locks and repeats this procedure down the insertion path. When the enqueue procedure reaches the target node at the bottom level, it changes the state of the target node to PRESENT so that a waiting dequeue operation can proceed. From the enqueue algorithm, the following property is given.

Property 1. An enqueue operation can change the key in a heap node if and only if it holds the Lastlock lock (from statements 1 to 12) or the lock on the heap node (at statement 24), or the heap node is the target node of the enqueue operation with PENDING state (at statement 37).

In the dequeue algorithm, a dequeue operation also first competes for the lock Lastlock. When it gets the lock, it calculates the last heap node and adjusts the heap control parameters in statements 2 to 14. It sets the sstate variable of the last heap node to WANTED, then enters a spinning test loop at statement 16 to check whether the last heap node has a stable key. It gets out of the spinning loop when the state variable of the last heap node has the value PRESENT. Then it copies the minimal key in the root to the variable least and removes the key of the last heap node into the variable newkey as its new key, so that its subsequent heap tree walking procedure is close to that of an enqueue operation. The dequeue operation starts its heap tree walking procedure at statement 27, beginning from the heap root. Along the heap tree walking path, a dequeue operation first acquires the locks on the current node and its two child nodes. It compares its new key with the smaller key of the child nodes. If the dequeue operation has a larger new key, it walks down the

heap tree along the child node with the smaller key. This procedure is repeated until the dequeue operation reaches a node whose two child nodes both have larger keys than the new key of the dequeue operation. Then the new key is finally inserted into the heap. From the dequeue algorithm, the following two properties can be derived.

Property 2. A dequeue operation can remove the key in a heap node if and only if it holds the Lastlock lock (at statement 17 or 18).

Property 3. A dequeue operation can change the key in a heap node if and only if it holds the locks on the node and its two child nodes (at statement 31 or 37).

The major point that distinguishes our dequeue algorithm from the previous dequeue algorithm in [Rao and Kumar 1988] is that, instead of directly moving the key of the last heap node into the root and sinking it down the heap tree, our dequeue algorithm removes the key of the last heap node into a temporary variable as its new key and compares that new key with the children of the current node. The new key of a dequeue operation appears in the heap only when the dequeue operation finds an appropriate insertion node. This approach helps exploit parallelism in the lock bypassing procedure of enqueue operations. For example, in Figure 1, where a dequeue operation is going to return key 10 in the root and an enqueue operation with a new key of 37 follows the dequeue operation, if the dequeue operation directly moved key 50 of the last heap node 12 into the root, the enqueue operation would always be blocked by the dequeue operation when walking down the tree along the path node 1 -> node 3 -> node 6, because the enqueue operation would always find that the keys locked by the dequeue operation are larger than its key 37.
However, if the key of the last node is not immediately moved into another heap node, the enqueue can bypass any nodes locked by the dequeue operation as long as the new key 50 of the dequeue has not yet been inserted into the heap. This helps the enqueue operation arrive at its insertion position, node 12, quickly.

3.3 Correctness analysis of the LB algorithm

In order to avoid the intricacy of formal proof methods, such as Petri nets and temporal logic, we show the correctness of our parallel dequeue and enqueue algorithms using more intuitive arguments. In the heap tree walking procedure of the dequeue operation presented in the example of the last section, the current node locked by the dequeue holds an invalid key. For example, when the dequeue operation locks the root, it copies out the key of the root as its return key. Although the root still holds its original key 10, this key is actually a deleted key. Moreover, when the dequeue operation walks down to node 3 from the root, it copies the key of node 3 into the root. This time, the key in node 3 is a duplicate of the root key. Once the dequeue inserts its new key into an appropriate node, no invalid keys remain in the heap. The duplicated and invalid keys facilitate the exploitation of parallelism by delaying the insertion of the new key of a dequeue operation as long as possible, since the new key, coming from the bottom level, is usually large. Invalid keys occur only in those heap nodes locked by the dequeue operation, and such keys cannot be returned as valid keys by another dequeue operation (by Property 2). One concern that should

be addressed is whether the existence of invalid keys violates the consistency of a heap, because the key to be inserted by a dequeue operation is not seen by any bypassing enqueues until the key is inserted into a final, appropriate heap node. Owing to this, it is possible for an enqueue operation holding a smaller key to have bypassed a dequeue operation holding a larger key. The following statements refer to the example above (see Figure 1). When the dequeue operation with a new key of 50 finishes popping node 6's key into node 3 and releasing the locks on node 3 and node 7, if the enqueue operation with a key of 37 following the dequeue operation has bypassed node 6 because its key is larger than the invalid key 30 in node 6, the enqueue is going to insert its key into node 12. By Property 1 and Property 2, it is known that the dequeue operation cannot insert its new key 50 into node 6 concurrently with the insertion of the new key of the enqueue into node 12. If the dequeue finishes its insertion into node 6 before the enqueue does, the enqueue operation will find the larger key in its parent by one-step back-checking, so it restarts its heap tree walking procedure from its parent node. Otherwise, the dequeue operation will be responsible for moving the new key 37 of node 12 into node 6 and putting its key 50 into node 12. This guarantees the consistency of the heap. The above argument shows that the bypassing of enqueue operations over dequeue operations is safe in our enqueue and dequeue algorithms. As for the bypassing of enqueues over enqueues, it is impossible for an enqueue with a smaller key to bypass a lock held by an enqueue with a larger key, because an enqueue stops at a heap node to acquire its lock if and only if it finds a larger key in the held node; in this case, a following enqueue operation with a smaller key must also stop at this node. So the bypassing of enqueues over enqueues is safe.
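Stripped of all locking and state flags, the "key in hand" rearrangement discussed above can be sketched sequentially. This is a minimal single-threaded sketch, not the concurrent code of Figure 7: the last node's key stays in a local variable, child keys are pulled upward past it, and it is written into the heap exactly once, at its final position.

```c
#include <assert.h>

/* Sequential sketch of the delayed-insertion sink: remove the minimum
   of a min-heap stored in heap[1..*last], holding the last node's key
   in a local variable until its final position is known. */
int dequeue_min(int heap[], int *last) {
    if (*last == 0) return -1;              /* empty heap */
    int least  = heap[1];                   /* key to be returned */
    int newkey = heap[(*last)--];           /* last key, kept "in hand" */
    int i = 1;
    while (2 * i <= *last) {
        int k = 2 * i;                      /* pick the smaller child */
        if (k + 1 <= *last && heap[k + 1] < heap[k]) k = k + 1;
        if (newkey <= heap[k]) break;
        heap[i] = heap[k];                  /* pull child up; newkey is
                                               still not in the heap */
        i = k;
    }
    heap[i] = newkey;                       /* the single final insertion */
    return least;
}
```

Until the final write, the nodes on the walk hold duplicated or invalid keys exactly as described above, which is what lets bypassing enqueues pass them safely in the concurrent version.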
Moreover, Properties 1, 2, and 3 show that the update on each heap node is atomic, i.e., only one operation can update a heap node at a time. From these points, the following conclusion can be derived.

Conclusion 1. The consistency of a heap is guaranteed by our parallel dequeue and enqueue algorithms.

Besides consistency, deadlock freedom is another requirement for the correctness of our concurrent enqueue and dequeue algorithms. For any real application, the number of events generated is finite, so we can assume the total number of enqueue and dequeue operations to be finite. In the parallel enqueue algorithm given in Figure 6, when an enqueue grabs the lock Lastlock at statement 1, it releases it within a finite time at statement 12. So enqueue operations do not block one another on the Lastlock lock, and the following conclusion can be derived.

Conclusion 2. An enqueue operation is indefinitely blocked by the Lastlock lock only when the Lastlock lock is held indefinitely by a dequeue operation.

Among statements 13 to 37 in Figure 6, the spinning test loop at statement 17 is the other place where an enqueue may be blocked. An enqueue operation spins indefinitely at statement 17 only when no dequeue operation is waiting for its key and it cannot get its requested lock. Here, we assume an enqueue can always get out of the spinning loop. (Later, we will show that this is true by considering a

dequeue operation.) If an enqueue gets out of the spinning loop by finding a waiting dequeue operation, it jumps directly to the end and finishes its enqueue procedure. So we only need to consider the situation in which an enqueue operation gets out of the spinning loop by acquiring its requested lock and no waiting dequeue is found. From statements 22 to 30 in Figure 6, we can see that the enqueue operation finishes all its operations and releases its held lock in a finite time. Combining this with its execution behavior on the Lastlock lock, we can conclude the following.

Conclusion 3. An enqueue operation never holds a lock indefinitely.

When an enqueue gets its requested lock and reaches statement 22, it may walk down the heap tree or walk back to its parent node. If an enqueue always walks down the heap, it finally reaches its target insertion node and finishes its insertion procedure, because a heap has a finite number of nodes. When it walks back to its parent node, it must have found, at statement 23, that its parent holds a larger key after the enqueue had bypassed it. Assume the current node locked by the enqueue operation is j, its parent node is Parent(j) (= floor(j/2)), and newkey is the new key of the enqueue operation. Walking back happens under the following condition:

    (newkey < Heap[j].key) ∧ (newkey < Heap[Parent(j)].key).    (1)

This can happen only when a dequeue was going to change the key of the parent while the enqueue operation was bypassing the parent node, because when an enqueue changes the key of a node, it only replaces it with a smaller key. However, a dequeue operation never walks back to a parent in its heap tree walking procedure. So an enqueue operation performs backward walks only a finite number of times, because the total number of dequeue operations is finite.
Finally, an enqueue operation will walk down the heap, reach its target insertion node, and finish its enqueue procedure. Combining the above analysis, we can derive another important conclusion as follows.

Conclusion 4. If an enqueue operation can acquire the Lastlock lock at statement 1 and the lock at the spinning loop of statement 17 in a finite time, it will finish its enqueue procedure in a finite time.

Now we analyze the dequeue algorithm (shown in Figure 7). A dequeue operation enters the global heap updating section by acquiring the Lastlock lock at statement 1. If it can get out of the spinning loop at statement 16 in a finite time, it releases the Lastlock lock in a finite time. A dequeue operation waits at the spinning loop when the key to be removed is held by an enqueue operation, which must be executing the enqueue algorithm among statements 14 to 37. By Conclusion 3, the enqueue operation being waited for by the dequeue operation can only be blocked by dequeue operations executing the dequeue algorithm in statements 26 to 39. However, from the dequeue algorithm, if a dequeue operation can acquire its requested locks among statements 26 to 39 in a finite time, it will release its locks in a finite time and walk down the heap tree. A dequeue operation visits a heap node only once. In a heap tree structure, two dequeue operations never block each other and, by Conclusion 4, an enqueue never blocks a dequeue. The enqueue operation waited for by the dequeue operation at the spinning loop will put a stable key into its target node, getting the dequeue operation out of the spinning loop in a finite time. So we have the following conclusion.

Conclusion 5. A dequeue operation never holds a lock indefinitely, and if it can get its requested locks in a finite time, it will finish its dequeue operation in a finite time.

By Conclusions 3, 4, and 5, when the total number of enqueues and dequeues is finite, each enqueue or dequeue will finish in a finite time. This shows that our parallel enqueue and parallel dequeue algorithms are deadlock free.

4. ANALYTICAL PERFORMANCE ANALYSIS

Expressing the performance of a parallel program precisely in mathematical form is difficult, and sometimes impossible, because a set of complicated factors must be formalized, ranging from parallel architectures and system software to applications. In this section, we approximately analyze the average number of locks that a hold operation (a pair of enqueue and dequeue operations) would wait for in the worst case during its execution, for our LB algorithm, the LR algorithm, and the RK algorithm. The analytical results provide upper bounds on the performance of the algorithms. We use the following assumptions to restrict our analysis:

(1) Heap node keys follow a uniform distribution.
(2) All processors issue their hold operations simultaneously and all the operations in a heap walk down the heap tree synchronously; i.e., if some operations can walk down the heap tree to the next level from the current level, they get to the next level simultaneously. This assumption sets the largest contention on the heap nodes, giving worst-case performance.
(3) The enqueue algorithm and the dequeue algorithm are simplified as a downward walking procedure along a heap tree.
(4) For simplicity, the heap is assumed to be a complete balanced tree (the results are approximately valid for an incomplete tree).

Let N be the total number of nodes in the heap. Its height is log(N + 1), denoted by h. (In this paper, all logs are base 2.)
In a heap, the levels are numbered downward from the root, which has level number 1; the bottom level nodes have level number h. In addition, let P be the total number of processors in the shared-memory system. From the viewpoint of an algorithmic abstraction, the dequeue operations in the LB algorithm, the LR algorithm, and the RK algorithm have the same procedure: removing a key from one bottom node and comparing it with the nodes along a downward walking path from the root. A dequeue operation can walk down to the next level from a current node only when its key is not smaller than the keys of the child nodes of the current node. The key of a dequeue operation comes from one bottom node, so it is known to be larger than the keys of the other h - 1 nodes on the path from that bottom node to the root. Combining this with assumption 1, the probability for the key of a dequeue operation to be larger than the key in any heap node is

    p_b = (N + h - 1) / (2N) = (N + log(N + 1) - 1) / (2N).    (2)


More information

to automatically generate parallel code for many applications that periodically update shared data structures using commuting operations and/or manipu

to automatically generate parallel code for many applications that periodically update shared data structures using commuting operations and/or manipu Semantic Foundations of Commutativity Analysis Martin C. Rinard y and Pedro C. Diniz z Department of Computer Science University of California, Santa Barbara Santa Barbara, CA 93106 fmartin,pedrog@cs.ucsb.edu

More information

Priority Queues and Binary Heaps

Priority Queues and Binary Heaps Yufei Tao ITEE University of Queensland In this lecture, we will learn our first tree data structure called the binary heap which serves as an implementation of the priority queue. Priority Queue A priority

More information

1 Binary trees. 1 Binary search trees. 1 Traversal. 1 Insertion. 1 An empty structure is an empty tree.

1 Binary trees. 1 Binary search trees. 1 Traversal. 1 Insertion. 1 An empty structure is an empty tree. Unit 6: Binary Trees Part 1 Engineering 4892: Data Structures Faculty of Engineering & Applied Science Memorial University of Newfoundland July 11, 2011 1 Binary trees 1 Binary search trees Analysis of

More information

Optimum Alphabetic Binary Trees T. C. Hu and J. D. Morgenthaler Department of Computer Science and Engineering, School of Engineering, University of C

Optimum Alphabetic Binary Trees T. C. Hu and J. D. Morgenthaler Department of Computer Science and Engineering, School of Engineering, University of C Optimum Alphabetic Binary Trees T. C. Hu and J. D. Morgenthaler Department of Computer Science and Engineering, School of Engineering, University of California, San Diego CA 92093{0114, USA Abstract. We

More information

CS350: Data Structures B-Trees

CS350: Data Structures B-Trees B-Trees James Moscola Department of Engineering & Computer Science York College of Pennsylvania James Moscola Introduction All of the data structures that we ve looked at thus far have been memory-based

More information

Linearizable Iterators

Linearizable Iterators Linearizable Iterators Supervised by Maurice Herlihy Abstract Petrank et. al. [5] provide a construction of lock-free, linearizable iterators for lock-free linked lists. We consider the problem of extending

More information

! Tree: set of nodes and directed edges. ! Parent: source node of directed edge. ! Child: terminal node of directed edge

! Tree: set of nodes and directed edges. ! Parent: source node of directed edge. ! Child: terminal node of directed edge Trees (& Heaps) Week 12 Gaddis: 20 Weiss: 21.1-3 CS 5301 Spring 2015 Jill Seaman 1 Tree: non-recursive definition! Tree: set of nodes and directed edges - root: one node is distinguished as the root -

More information

CS350: Data Structures Heaps and Priority Queues

CS350: Data Structures Heaps and Priority Queues Heaps and Priority Queues James Moscola Department of Engineering & Computer Science York College of Pennsylvania James Moscola Priority Queue An abstract data type of a queue that associates a priority

More information

Thus, it is reasonable to compare binary search trees and binary heaps as is shown in Table 1.

Thus, it is reasonable to compare binary search trees and binary heaps as is shown in Table 1. 7.2 Binary Min-Heaps A heap is a tree-based structure, but it doesn t use the binary-search differentiation between the left and right sub-trees to create a linear ordering. Instead, a binary heap only

More information

Chapter 4: Trees. 4.2 For node B :

Chapter 4: Trees. 4.2 For node B : Chapter : Trees. (a) A. (b) G, H, I, L, M, and K.. For node B : (a) A. (b) D and E. (c) C. (d). (e).... There are N nodes. Each node has two pointers, so there are N pointers. Each node but the root has

More information

arxiv: v3 [cs.ds] 18 Apr 2011

arxiv: v3 [cs.ds] 18 Apr 2011 A tight bound on the worst-case number of comparisons for Floyd s heap construction algorithm Ioannis K. Paparrizos School of Computer and Communication Sciences Ècole Polytechnique Fèdèrale de Lausanne

More information

A Dag-Based Algorithm for Distributed Mutual Exclusion. Kansas State University. Manhattan, Kansas maintains [18]. algorithms [11].

A Dag-Based Algorithm for Distributed Mutual Exclusion. Kansas State University. Manhattan, Kansas maintains [18]. algorithms [11]. A Dag-Based Algorithm for Distributed Mutual Exclusion Mitchell L. Neilsen Masaaki Mizuno Department of Computing and Information Sciences Kansas State University Manhattan, Kansas 66506 Abstract The paper

More information

Priority Queues and Binary Heaps. See Chapter 21 of the text, pages

Priority Queues and Binary Heaps. See Chapter 21 of the text, pages Priority Queues and Binary Heaps See Chapter 21 of the text, pages 807-839. A priority queue is a queue-like data structure that assumes data is comparable in some way (or at least has some field on which

More information

Definition of a Heap. Heaps. Priority Queues. Example. Implementation using a heap. Heap ADT

Definition of a Heap. Heaps. Priority Queues. Example. Implementation using a heap. Heap ADT Heaps Definition of a heap What are they for: priority queues Insertion and deletion into heaps Implementation of heaps Heap sort Not to be confused with: heap as the portion of computer memory available

More information

For more Articles Go To: Whatisdbms.com CONCURRENCY CONTROL PROTOCOL

For more Articles Go To: Whatisdbms.com CONCURRENCY CONTROL PROTOCOL For more Articles Go To: Whatisdbms.com CONCURRENCY CONTROL PROTOCOL In the multi-user system, we all know that multiple transactions run in parallel, thus trying to access the same data and suppose if

More information

9. Heap : Priority Queue

9. Heap : Priority Queue 9. Heap : Priority Queue Where We Are? Array Linked list Stack Queue Tree Binary Tree Heap Binary Search Tree Priority Queue Queue Queue operation is based on the order of arrivals of elements FIFO(First-In

More information

The priority is indicated by a number, the lower the number - the higher the priority.

The priority is indicated by a number, the lower the number - the higher the priority. CmSc 250 Intro to Algorithms Priority Queues 1. Introduction Usage of queues: in resource management: several users waiting for one and the same resource. Priority queues: some users have priority over

More information

Algorithms in Systems Engineering ISE 172. Lecture 16. Dr. Ted Ralphs

Algorithms in Systems Engineering ISE 172. Lecture 16. Dr. Ted Ralphs Algorithms in Systems Engineering ISE 172 Lecture 16 Dr. Ted Ralphs ISE 172 Lecture 16 1 References for Today s Lecture Required reading Sections 6.5-6.7 References CLRS Chapter 22 R. Sedgewick, Algorithms

More information

Priority Queues. T. M. Murali. January 26, T. M. Murali January 26, 2016 Priority Queues

Priority Queues. T. M. Murali. January 26, T. M. Murali January 26, 2016 Priority Queues Priority Queues T. M. Murali January 26, 2016 Motivation: Sort a List of Numbers Sort INSTANCE: Nonempty list x 1, x 2,..., x n of integers. SOLUTION: A permutation y 1, y 2,..., y n of x 1, x 2,..., x

More information

[ DATA STRUCTURES ] Fig. (1) : A Tree

[ DATA STRUCTURES ] Fig. (1) : A Tree [ DATA STRUCTURES ] Chapter - 07 : Trees A Tree is a non-linear data structure in which items are arranged in a sorted sequence. It is used to represent hierarchical relationship existing amongst several

More information

CSCI2100B Data Structures Heaps

CSCI2100B Data Structures Heaps CSCI2100B Data Structures Heaps Irwin King king@cse.cuhk.edu.hk http://www.cse.cuhk.edu.hk/~king Department of Computer Science & Engineering The Chinese University of Hong Kong Introduction In some applications,

More information

RECONFIGURATION OF HIERARCHICAL TUPLE-SPACES: EXPERIMENTS WITH LINDA-POLYLITH. Computer Science Department and Institute. University of Maryland

RECONFIGURATION OF HIERARCHICAL TUPLE-SPACES: EXPERIMENTS WITH LINDA-POLYLITH. Computer Science Department and Institute. University of Maryland RECONFIGURATION OF HIERARCHICAL TUPLE-SPACES: EXPERIMENTS WITH LINDA-POLYLITH Gilberto Matos James Purtilo Computer Science Department and Institute for Advanced Computer Studies University of Maryland

More information

Algorithms and Data Structures

Algorithms and Data Structures Algorithms and Data Structures Dr. Malek Mouhoub Department of Computer Science University of Regina Fall 2002 Malek Mouhoub, CS3620 Fall 2002 1 6. Priority Queues 6. Priority Queues ffl ADT Stack : LIFO.

More information

Lecture Notes on Priority Queues

Lecture Notes on Priority Queues Lecture Notes on Priority Queues 15-122: Principles of Imperative Computation Frank Pfenning Lecture 16 October 18, 2012 1 Introduction In this lecture we will look at priority queues as an abstract type

More information

Lecture 2 - Graph Theory Fundamentals - Reachability and Exploration 1

Lecture 2 - Graph Theory Fundamentals - Reachability and Exploration 1 CME 305: Discrete Mathematics and Algorithms Instructor: Professor Aaron Sidford (sidford@stanford.edu) January 11, 2018 Lecture 2 - Graph Theory Fundamentals - Reachability and Exploration 1 In this lecture

More information

Trees. (Trees) Data Structures and Programming Spring / 28

Trees. (Trees) Data Structures and Programming Spring / 28 Trees (Trees) Data Structures and Programming Spring 2018 1 / 28 Trees A tree is a collection of nodes, which can be empty (recursive definition) If not empty, a tree consists of a distinguished node r

More information

DATA STRUCTURES/UNIT 3

DATA STRUCTURES/UNIT 3 UNIT III SORTING AND SEARCHING 9 General Background Exchange sorts Selection and Tree Sorting Insertion Sorts Merge and Radix Sorts Basic Search Techniques Tree Searching General Search Trees- Hashing.

More information

MultiJav: A Distributed Shared Memory System Based on Multiple Java Virtual Machines. MultiJav: Introduction

MultiJav: A Distributed Shared Memory System Based on Multiple Java Virtual Machines. MultiJav: Introduction : A Distributed Shared Memory System Based on Multiple Java Virtual Machines X. Chen and V.H. Allan Computer Science Department, Utah State University 1998 : Introduction Built on concurrency supported

More information

Laboratory Module X B TREES

Laboratory Module X B TREES Purpose: Purpose 1... Purpose 2 Purpose 3. Laboratory Module X B TREES 1. Preparation Before Lab When working with large sets of data, it is often not possible or desirable to maintain the entire structure

More information

V Advanced Data Structures

V Advanced Data Structures V Advanced Data Structures B-Trees Fibonacci Heaps 18 B-Trees B-trees are similar to RBTs, but they are better at minimizing disk I/O operations Many database systems use B-trees, or variants of them,

More information

Multiprocessor Systems. Chapter 8, 8.1

Multiprocessor Systems. Chapter 8, 8.1 Multiprocessor Systems Chapter 8, 8.1 1 Learning Outcomes An understanding of the structure and limits of multiprocessor hardware. An appreciation of approaches to operating system support for multiprocessor

More information

Ensures that no such path is more than twice as long as any other, so that the tree is approximately balanced

Ensures that no such path is more than twice as long as any other, so that the tree is approximately balanced 13 Red-Black Trees A red-black tree (RBT) is a BST with one extra bit of storage per node: color, either RED or BLACK Constraining the node colors on any path from the root to a leaf Ensures that no such

More information

IT 540 Operating Systems ECE519 Advanced Operating Systems

IT 540 Operating Systems ECE519 Advanced Operating Systems IT 540 Operating Systems ECE519 Advanced Operating Systems Prof. Dr. Hasan Hüseyin BALIK (5 th Week) (Advanced) Operating Systems 5. Concurrency: Mutual Exclusion and Synchronization 5. Outline Principles

More information

! Tree: set of nodes and directed edges. ! Parent: source node of directed edge. ! Child: terminal node of directed edge

! Tree: set of nodes and directed edges. ! Parent: source node of directed edge. ! Child: terminal node of directed edge Trees & Heaps Week 12 Gaddis: 20 Weiss: 21.1-3 CS 5301 Fall 2018 Jill Seaman!1 Tree: non-recursive definition! Tree: set of nodes and directed edges - root: one node is distinguished as the root - Every

More information

CS510 Advanced Topics in Concurrency. Jonathan Walpole

CS510 Advanced Topics in Concurrency. Jonathan Walpole CS510 Advanced Topics in Concurrency Jonathan Walpole Threads Cannot Be Implemented as a Library Reasoning About Programs What are the valid outcomes for this program? Is it valid for both r1 and r2 to

More information

quiz heapsort intuition overview Is an algorithm with a worst-case time complexity in O(n) data structures and algorithms lecture 3

quiz heapsort intuition overview Is an algorithm with a worst-case time complexity in O(n) data structures and algorithms lecture 3 quiz data structures and algorithms 2018 09 10 lecture 3 Is an algorithm with a worst-case time complexity in O(n) always faster than an algorithm with a worst-case time complexity in O(n 2 )? intuition

More information

HEAPS: IMPLEMENTING EFFICIENT PRIORITY QUEUES

HEAPS: IMPLEMENTING EFFICIENT PRIORITY QUEUES HEAPS: IMPLEMENTING EFFICIENT PRIORITY QUEUES 2 5 6 9 7 Presentation for use with the textbook Data Structures and Algorithms in Java, 6 th edition, by M. T. Goodrich, R. Tamassia, and M. H., Wiley, 2014

More information

On the other hand, the main disadvantage of the amortized approach is that it cannot be applied in real-time programs, where the worst-case bound on t

On the other hand, the main disadvantage of the amortized approach is that it cannot be applied in real-time programs, where the worst-case bound on t Randomized Meldable Priority Queues Anna Gambin and Adam Malinowski Instytut Informatyki, Uniwersytet Warszawski, Banacha 2, Warszawa 02-097, Poland, faniag,amalg@mimuw.edu.pl Abstract. We present a practical

More information

Submitted for TAU97 Abstract Many attempts have been made to combine some form of retiming with combinational

Submitted for TAU97 Abstract Many attempts have been made to combine some form of retiming with combinational Experiments in the Iterative Application of Resynthesis and Retiming Soha Hassoun and Carl Ebeling Department of Computer Science and Engineering University ofwashington, Seattle, WA fsoha,ebelingg@cs.washington.edu

More information

Quiz 1 Solutions. (a) f(n) = n g(n) = log n Circle all that apply: f = O(g) f = Θ(g) f = Ω(g)

Quiz 1 Solutions. (a) f(n) = n g(n) = log n Circle all that apply: f = O(g) f = Θ(g) f = Ω(g) Introduction to Algorithms March 11, 2009 Massachusetts Institute of Technology 6.006 Spring 2009 Professors Sivan Toledo and Alan Edelman Quiz 1 Solutions Problem 1. Quiz 1 Solutions Asymptotic orders

More information

A Synchronization Algorithm for Distributed Systems

A Synchronization Algorithm for Distributed Systems A Synchronization Algorithm for Distributed Systems Tai-Kuo Woo Department of Computer Science Jacksonville University Jacksonville, FL 32211 Kenneth Block Department of Computer and Information Science

More information

Part 2: Balanced Trees

Part 2: Balanced Trees Part 2: Balanced Trees 1 AVL Trees We could dene a perfectly balanced binary search tree with N nodes to be a complete binary search tree, one in which every level except the last is completely full. A

More information

Multiple Inheritance. Computer object can be viewed as

Multiple Inheritance. Computer object can be viewed as Multiple Inheritance We have seen that a class may be derived from a given parent class. It is sometimes useful to allow a class to be derived from more than one parent, inheriting members of all parents.

More information

Binary Trees

Binary Trees Binary Trees 4-7-2005 Opening Discussion What did we talk about last class? Do you have any code to show? Do you have any questions about the assignment? What is a Tree? You are all familiar with what

More information

CS-537: Midterm Exam (Spring 2001)

CS-537: Midterm Exam (Spring 2001) CS-537: Midterm Exam (Spring 2001) Please Read All Questions Carefully! There are seven (7) total numbered pages Name: 1 Grading Page Points Total Possible Part I: Short Answers (12 5) 60 Part II: Long

More information

CS 350 : Data Structures B-Trees

CS 350 : Data Structures B-Trees CS 350 : Data Structures B-Trees David Babcock (courtesy of James Moscola) Department of Physical Sciences York College of Pennsylvania James Moscola Introduction All of the data structures that we ve

More information

Chapter 5 Concurrency: Mutual Exclusion and Synchronization

Chapter 5 Concurrency: Mutual Exclusion and Synchronization Operating Systems: Internals and Design Principles Chapter 5 Concurrency: Mutual Exclusion and Synchronization Seventh Edition By William Stallings Designing correct routines for controlling concurrent

More information

Suggested Solutions (Midterm Exam October 27, 2005)

Suggested Solutions (Midterm Exam October 27, 2005) Suggested Solutions (Midterm Exam October 27, 2005) 1 Short Questions (4 points) Answer the following questions (True or False). Use exactly one sentence to describe why you choose your answer. Without

More information

ADAPTIVE SORTING WITH AVL TREES

ADAPTIVE SORTING WITH AVL TREES ADAPTIVE SORTING WITH AVL TREES Amr Elmasry Computer Science Department Alexandria University Alexandria, Egypt elmasry@alexeng.edu.eg Abstract A new adaptive sorting algorithm is introduced. The new implementation

More information

Tree: non-recursive definition. Trees, Binary Search Trees, and Heaps. Tree: recursive definition. Tree: example.

Tree: non-recursive definition. Trees, Binary Search Trees, and Heaps. Tree: recursive definition. Tree: example. Trees, Binary Search Trees, and Heaps CS 5301 Fall 2013 Jill Seaman Tree: non-recursive definition Tree: set of nodes and directed edges - root: one node is distinguished as the root - Every node (except

More information

Database Management Systems

Database Management Systems Database Management Systems Concurrency Control Doug Shook Review Why do we need transactions? What does a transaction contain? What are the four properties of a transaction? What is a schedule? What is

More information

V Advanced Data Structures

V Advanced Data Structures V Advanced Data Structures B-Trees Fibonacci Heaps 18 B-Trees B-trees are similar to RBTs, but they are better at minimizing disk I/O operations Many database systems use B-trees, or variants of them,

More information

B-Trees and External Memory

B-Trees and External Memory Presentation for use with the textbook, Algorithm Design and Applications, by M. T. Goodrich and R. Tamassia, Wiley, 2015 and External Memory 1 1 (2, 4) Trees: Generalization of BSTs Each internal node

More information

Treaps. 1 Binary Search Trees (BSTs) CSE341T/CSE549T 11/05/2014. Lecture 19

Treaps. 1 Binary Search Trees (BSTs) CSE341T/CSE549T 11/05/2014. Lecture 19 CSE34T/CSE549T /05/04 Lecture 9 Treaps Binary Search Trees (BSTs) Search trees are tree-based data structures that can be used to store and search for items that satisfy a total order. There are many types

More information

Resource Sharing & Management

Resource Sharing & Management Resource Sharing & Management P.C.P Bhatt P.C.P Bhatt OS/M6/V1/2004 1 Introduction Some of the resources connected to a computer system (image processing resource) may be expensive. These resources may

More information

Concurrency, Mutual Exclusion and Synchronization C H A P T E R 5

Concurrency, Mutual Exclusion and Synchronization C H A P T E R 5 Concurrency, Mutual Exclusion and Synchronization C H A P T E R 5 Multiple Processes OS design is concerned with the management of processes and threads: Multiprogramming Multiprocessing Distributed processing

More information

! Why is synchronization needed? ! Synchronization Language/Definitions: ! How are locks implemented? Maria Hybinette, UGA

! Why is synchronization needed? ! Synchronization Language/Definitions: ! How are locks implemented? Maria Hybinette, UGA Chapter 6: Process [& Thread] Synchronization CSCI [4 6] 730 Operating Systems Synchronization Part 1 : The Basics! Why is synchronization needed?! Synchronization Language/Definitions:» What are race

More information

A Note on Scheduling Parallel Unit Jobs on Hypercubes

A Note on Scheduling Parallel Unit Jobs on Hypercubes A Note on Scheduling Parallel Unit Jobs on Hypercubes Ondřej Zajíček Abstract We study the problem of scheduling independent unit-time parallel jobs on hypercubes. A parallel job has to be scheduled between

More information

Postfix (and prefix) notation

Postfix (and prefix) notation Postfix (and prefix) notation Also called reverse Polish reversed form of notation devised by mathematician named Jan Łukasiewicz (so really lü-kä-sha-vech notation) Infix notation is: operand operator

More information

Priority Queues and Binary Heaps. See Chapter 21 of the text, pages

Priority Queues and Binary Heaps. See Chapter 21 of the text, pages Priority Queues and Binary Heaps See Chapter 21 of the text, pages 807-839. A priority queue is a queue-like data structure that assumes data is comparable in some way (or at least has some field on which

More information

Concurrent Objects and Linearizability

Concurrent Objects and Linearizability Chapter 3 Concurrent Objects and Linearizability 3.1 Specifying Objects An object in languages such as Java and C++ is a container for data. Each object provides a set of methods that are the only way

More information

Chapter 6: Process [& Thread] Synchronization. CSCI [4 6] 730 Operating Systems. Why does cooperation require synchronization?

Chapter 6: Process [& Thread] Synchronization. CSCI [4 6] 730 Operating Systems. Why does cooperation require synchronization? Chapter 6: Process [& Thread] Synchronization CSCI [4 6] 730 Operating Systems Synchronization Part 1 : The Basics Why is synchronization needed? Synchronization Language/Definitions:» What are race conditions?»

More information

Operating Systems. Designed and Presented by Dr. Ayman Elshenawy Elsefy

Operating Systems. Designed and Presented by Dr. Ayman Elshenawy Elsefy Operating Systems Designed and Presented by Dr. Ayman Elshenawy Elsefy Dept. of Systems & Computer Eng.. AL-AZHAR University Website : eaymanelshenawy.wordpress.com Email : eaymanelshenawy@yahoo.com Reference

More information