Funnel Heap - A Cache Oblivious Priority Queue

Alcom-FT Technical Report Series ALCOMFT-TR

Funnel Heap - A Cache Oblivious Priority Queue

Gerth Stølting Brodal, Rolf Fagerberg

Abstract

The cache oblivious model of computation is a two-level memory model with the assumption that the parameters of the model are unknown to the algorithms. A consequence of this assumption is that an algorithm efficient in the cache oblivious model is automatically efficient in a multi-level memory model. Arge et al. recently presented the first optimal cache oblivious priority queue, and demonstrated the importance of this result by providing the first cache oblivious algorithms for graph problems. Their structure uses cache oblivious sorting and selection as subroutines. In this paper, we devise an alternative optimal cache oblivious priority queue based only on binary merging. We also show that our structure can be made adaptive to different usage profiles.

Keywords: Cache oblivious algorithms, priority queues, memory hierarchies, merging

BRICS (Basic Research in Computer Science, funded by the Danish National Research Foundation), Department of Computer Science, University of Aarhus, Ny Munkegade, DK-8000 Århus C, Denmark. {gerth,rolf}@brics.dk. Partially supported by the Future and Emerging Technologies programme of the EU under contract number IST (ALCOM-FT). Supported by the Carlsberg Foundation (contract number ANS-0257/20).

1 Introduction

External memory models are formal models for analyzing the impact of the memory access patterns of algorithms in the presence of several levels of memory and caches on modern computer architectures. The cache oblivious model, recently introduced by Frigo et al. [10], is based on the I/O model of Aggarwal and Vitter [1], which has been the most widely used model for developing external memory algorithms; see the survey by Vitter [11]. Both models assume a two-level memory hierarchy where the lower level has size M and data is transferred between the two levels in blocks of B elements. The difference is that in the I/O model the algorithms are aware of B and M, whereas in the cache oblivious model these parameters are unknown to the algorithms and I/Os are handled automatically by an optimal off-line cache replacement strategy. Frigo et al. [10] showed that an efficient algorithm in the cache oblivious model is automatically efficient on each level of a multi-level memory model. They also presented optimal cache oblivious algorithms for matrix transposition, FFT, and sorting. Cache oblivious search trees which match the search cost of the standard (cache aware) B-trees [3] were presented in [4, 5, 6, 8]. Cache oblivious algorithms for computational geometry problems were developed in [4, 7]. The first cache oblivious priority queue was recently developed by Arge et al. [2], and gave rise to several cache oblivious graph algorithms [2]. Their structure uses existing cache oblivious sorting and selection algorithms as subroutines.

In this paper, we present an alternative optimal cache oblivious priority queue, Funnel Heap, based only on binary merging. Essentially, our structure is a single heap-ordered tree with binary mergers in the nodes and buffers on the edges. It was inspired by the cache oblivious merge-sort algorithm Funnelsort presented in [10] and simplified in [7]. As for the priority queue of Arge et al., our data structure supports the operations Insert and DeleteMin using amortized $O(\frac{1}{B}\log_{M/B}\frac{N}{B})$ I/Os per operation, under the so-called tall cache assumption $M \geq B^2$. Here, N is the total number of elements inserted. For a slightly different algorithm we give a refined analysis, showing that the priority queue adapts to different profiles of usage. More precisely, we show that the ith insertion uses amortized $O(\frac{1}{B}\log_{M/B}\frac{N_i}{B})$ I/Os, with $N_i$ defined in one of the following three ways: (a) the number of elements present in the priority queue when performing the ith insert operation, (b) if the ith inserted element is removed by a DeleteMin operation prior to the jth insertion then $N_i = j - i$, or (c) the maximum rank that the ith inserted element has during its lifetime in the priority queue, where rank denotes the number of smaller elements present in the priority queue.

DeleteMin is amortized for free since the work is charged to the insertions. These results extend the line of research taken in [9], where (a) and (c) are called the size profile and the max depth profile, respectively.

This paper is organized as follows. In Section 2 we introduce the concept of mergers, and in Section 3 we describe our priority queue. Section 4 gives the analysis of the presented data structure. Finally, Section 5 gives the analysis based on different profiles of usage.

2 Mergers

Our data structure is based on binary mergers. A binary merger takes as input two sorted streams of elements and delivers as output the sorted stream formed by merging these. One merge step moves an element from the head of one of the input streams to the tail of the output stream. The heads of the input streams and the tail of the output stream reside in buffers holding a limited number of elements. A buffer is simply an array of elements, plus fields storing the capacity of the buffer and pointers to the first and last elements in the buffer. Binary mergers may be combined into binary merge trees by letting the output buffer of one merger be an input buffer of another; in other words, binary merge trees are binary trees with mergers at the nodes and buffers at the edges. The leaves of the tree contain the streams to be merged.

An invocation of a merger is a recursive procedure which performs merge steps until its output buffer is full (or both input streams are exhausted). If during the invocation an input buffer gets empty (but the corresponding stream is not exhausted), the input buffer is recursively filled by an invocation of the merger having this buffer as its output buffer. If both input streams of a merger get exhausted, the corresponding output stream is marked as exhausted. The procedure (except for the issue of exhaustion) is shown in Figure 1 as the procedure Fill(v). A single invocation Fill(r) on the root r of the merge tree will produce a stream which is the merge of the streams at the leaves of the tree.

Procedure Fill(v)
    while v's output buffer is not full
        if left input buffer empty
            Fill(left child of v)
        if right input buffer empty
            Fill(right child of v)
        perform one merge step

Figure 1: The merging algorithm.
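To make the merging procedure concrete, here is a small Python sketch of a binary merger and the Fill procedure of Figure 1, including the exhaustion rule just described. It is an illustration only, not the paper's implementation: the class names, the capacity bookkeeping, and the use of Python deques are our own choices, and leaf streams are modeled simply as input buffers preloaded with a sorted run.

from collections import deque

class Buffer:
    # A bounded buffer holding a sorted run; 'exhausted' marks that the
    # stream feeding it will deliver no further elements.
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()
        self.exhausted = False

    def full(self):
        return len(self.items) >= self.capacity

    def empty(self):
        return len(self.items) == 0

class Merger:
    # A binary merger with two input buffers and one output buffer.  A child
    # is the merger whose output buffer is the corresponding input buffer
    # (None if the input buffer is a preloaded leaf stream).
    def __init__(self, left_in, right_in, out, left_child=None, right_child=None):
        self.left_in, self.right_in, self.out = left_in, right_in, out
        self.left_child, self.right_child = left_child, right_child

def fill(v):
    # Perform merge steps until v's output buffer is full, recursively
    # refilling empty input buffers, and mark the output as exhausted when
    # both input streams are exhausted (Figure 1 plus the exhaustion rule).
    while not v.out.full():
        if v.left_in.empty() and not v.left_in.exhausted and v.left_child:
            fill(v.left_child)
        if v.right_in.empty() and not v.right_in.exhausted and v.right_child:
            fill(v.right_child)
        left_done = v.left_in.empty() and (v.left_in.exhausted or v.left_child is None)
        right_done = v.right_in.empty() and (v.right_in.exhausted or v.right_child is None)
        if left_done and right_done:
            v.out.exhausted = True
            return
        if right_done or (not left_done and v.left_in.items[0] <= v.right_in.items[0]):
            v.out.items.append(v.left_in.items.popleft())   # one merge step
        else:
            v.out.items.append(v.right_in.items.popleft())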

Figure 2: A 16-merger consisting of 15 binary mergers. Shaded regions are the occupied parts of the buffers.

One particular merge tree is the k-merger. In this paper, $k = 2^i$ for some positive integer i. A k-merger is a perfect binary tree of $k - 1$ binary mergers with appropriately sized buffers on the edges, k input streams, and an output buffer at the root of size $k^3$. A 16-merger is illustrated in Figure 2. The sizes of the buffers are defined recursively: let the top tree be the subtree consisting of all nodes of depth at most $\lceil i/2 \rceil$, and let the subtrees rooted by nodes at depth $\lceil i/2 \rceil + 1$ be the bottom trees. The edges between nodes at depth $\lceil i/2 \rceil$ and depth $\lceil i/2 \rceil + 1$ have associated buffers of size $k^{3/2}$, and the sizes of the remaining buffers are defined by recursion on the top tree and the bottom trees.

To achieve I/O efficiency in the cache oblivious model, the layout of a k-merger is also defined recursively. The entire k-merger is laid out in memory in contiguous locations, first the top tree, then the middle buffers, and finally the bottom trees, and this layout is applied recursively within the top tree and each of the bottom trees.
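As an illustration of this recursive definition (not code from the paper), the following Python sketch collects the sizes of all internal buffers of a k-merger; the split at depth $\lceil i/2 \rceil$, the rounding of $k^{3/2}$ to an integer, and the function name are our own choices. Summing the sizes illustrates the $\Theta(k^2)$ space usage stated in Lemma 1 below.

import math

def middle_buffer_sizes(k):
    # Sizes of the internal buffers of a k-merger, k = 2^i, following the
    # recursive definition: the buffers between the top tree and the bottom
    # trees have size k^(3/2) (rounded here), and the rest come from
    # recursion on the top tree and the bottom trees.  The k input streams
    # and the size-k^3 output buffer are not counted.
    i = round(math.log2(k))
    if i <= 1:
        return []                      # a binary merger has no internal buffers
    top = 2 ** math.ceil(i / 2)        # the top tree is a top-merger
    bottom = k // top                  # each bottom tree is a bottom-merger
    sizes = [round(k ** 1.5)] * top    # one middle buffer per bottom tree
    sizes += middle_buffer_sizes(top)
    for _ in range(top):
        sizes += middle_buffer_sizes(bottom)
    return sizes

for k in (4, 16, 64, 256):
    print(k, sum(middle_buffer_sizes(k)), k * k)   # total buffer space grows as Theta(k^2)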

The k-merger structure was defined by Frigo et al. [10] for use in their cache oblivious mergesort algorithm Funnelsort. The algorithm described above for invoking a k-merger appeared in [7], and is a simplification of the original one. For both algorithms, the following lemma holds [7, 10].

Lemma 1 The invocation of the root of a k-merger uses $O(k + \frac{k^3}{B}\log_{M/B} k^3)$ I/Os. The space required for a k-merger is $O(k^2)$, not counting the space for the input and output streams.

3 The priority queue

In this section, we present a priority queue based solely on binary mergers. Our data structure consists of a sequence of k-mergers of double-exponentially increasing k, linked together in a list by binary mergers and buffers as depicted in Figure 3. In the figure, circles denote binary mergers, rectangles denote buffers, and triangles denote k-mergers. Note that the entire structure constitutes a single binary merge tree.

Figure 3: The priority queue based on binary mergers.

More precisely, let $k_i$ and $s_i$ be values defined inductively as follows:

$(k_1, s_1) = (2, 8)$,  $s_{i+1} = s_i (k_i + 1)$,  $k_{i+1} = \lceil\lceil s_{i+1}^{1/3} \rceil\rceil$,  (1)

where $\lceil\lceil x \rceil\rceil$ denotes the smallest power of two above x, i.e. $\lceil\lceil x \rceil\rceil = 2^{\lceil \log x \rceil}$.
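For concreteness, the following Python snippet evaluates recurrence (1); the helper hyperceil and the function name are illustrative only, not taken from the paper.

def hyperceil(x):
    # smallest power of two greater than or equal to x
    p = 1
    while p < x:
        p *= 2
    return p

def link_parameters(n):
    # (k_i, s_i) for links i = 1, ..., n, following recurrence (1)
    ks, ss = [2], [8]
    for _ in range(n - 1):
        s_next = ss[-1] * (ks[-1] + 1)
        ks.append(hyperceil(s_next ** (1 / 3)))
        ss.append(s_next)
    return list(zip(ks, ss))

print(link_parameters(5))   # [(2, 8), (4, 24), (8, 120), (16, 1080), (32, 18360)]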

Link i in the linked list consists of a binary merger $v_i$, two buffers $A_i$ and $B_i$, and a $k_i$-merger $K_i$ with $k_i$ input buffers $S_{i1}, \ldots, S_{ik_i}$. The output buffer of $v_i$ is $A_i$, and its input buffers are $B_i$ and the buffer $A_{i+1}$ from the next link. The output buffer of $K_i$ is $B_i$. The size of both $A_i$ and $B_i$ is $k_i^3$, and the size of each $S_{ij}$ is $s_i$. Link i has an associated counter $c_i$, which is an integer between one and $k_i + 1$ (inclusive), initially of value one. It will be an invariant that $S_{ic_i}, \ldots, S_{ik_i}$ are empty. Additionally, the structure contains one insertion buffer I of size $s_1$. All buffers contain a (possibly empty) sorted sequence of elements. The structure is laid out in memory in the order I, link 1, link 2, ..., and within link i the layout order is $A_i$, $B_i$, $K_i$, $S_{i1}$, $S_{i2}$, ..., $S_{ik_i}$.

The linked list of buffers and mergers constitutes one binary tree T with root $v_1$ and with sorted sequences of elements on the edges. We maintain the invariant that this tree is heap-ordered, i.e. when traversing any path towards the root, elements will be passed in decreasing order. Note that the invocation of a binary merger maintains this invariant. The invariant implies that if buffer $A_1$ is non-empty, the minimum element in the queue will be in $A_1$ or in I.

To perform a DeleteMin operation, we first check whether buffer $A_1$ is empty, and if so invoke Fill($v_1$). We then compare the smallest element in $A_1$ to the smallest element (if any) in I, remove the smaller of these from its buffer, and return it.

To perform an Insert operation, the new element is inserted in I while rearranging the contents of the buffer to maintain sorted order. If the number of elements in I is now seven or less, we stop. If the number of elements in I becomes eight, we perform the following sweep operation, where i is the lowest index for which $c_i \leq k_i$. The sweep operation will move the contents of links $1, \ldots, i-1$ to the destination buffer $S_{ic_i}$. It traverses the path p in T from $A_1$ to $S_{ic_i}$ and for each buffer on this path records how many elements it currently contains. It forms a sorted stream $\sigma_1$ of the elements in the buffers on the part of p from $A_i$ to $S_{ic_i}$ by traversing that part of the path. It forms a sorted stream $\sigma_2$ of all elements in links $1, \ldots, i-1$ and in buffer I by marking $A_i$ as exhausted and calling DeleteMin repeatedly. It then merges $\sigma_1$ and $\sigma_2$ into a single stream $\sigma$, inserts the front elements of $\sigma$ in the buffers on the path p during a downwards traversal in such a way that all these buffers contain the same numbers of elements as before the insertion, and inserts the remaining part of $\sigma$ in $S_{ic_i}$. Finally, it resets $c_l$ to one for $l = 1, 2, \ldots, i-1$ and increments $c_i$.
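The Python sketch below is a toy model of this bookkeeping, included only as an illustration: it keeps the insertion buffer I, the slot buffers $S_{ij}$ as sorted runs, and the counters $c_i$ (implicitly, as the number of used slots), and it performs the sweep as described, but it finds the minimum by scanning run heads instead of pulling elements through the $A_i$ and $B_i$ buffers and the k-mergers. It therefore mirrors the logic of Insert, DeleteMin and the sweep, not the cache-oblivious I/O behaviour of the real structure; all names are our own.

import bisect

def hyperceil(x):
    # smallest power of two greater than or equal to x
    p = 1
    while p < x:
        p *= 2
    return p

class ToyFunnelHeap:
    def __init__(self, max_links=20):
        self.I = []                        # insertion buffer of capacity s_1 = 8
        self.links = []                    # per link: {"k": k_i, "runs": [sorted lists]}
        k, s = 2, 8
        for _ in range(max_links):
            self.links.append({"k": k, "runs": []})
            s = s * (k + 1)                # recurrence (1)
            k = hyperceil(s ** (1 / 3))

    def insert(self, x):
        bisect.insort(self.I, x)           # keep I sorted
        if len(self.I) < 8:
            return
        # sweep: lowest index i with a free slot, i.e. c_i <= k_i
        i = next(j for j, link in enumerate(self.links) if len(link["runs"]) < link["k"])
        collected = list(self.I)            # together with the loop below: I and links 1..i-1
        for link in self.links[:i]:         # empty links 1..i-1, resetting their c_l
            for run in link["runs"]:
                collected.extend(run)
            link["runs"] = []
        self.links[i]["runs"].append(sorted(collected))   # the new S_{i,c_i}
        self.I = []

    def delete_min(self):
        # the minimum is the smallest head among I and all non-empty runs
        heads = ([(self.I[0], self.I)] if self.I else []) + \
                [(run[0], run) for link in self.links for run in link["runs"] if run]
        _, run = min(heads, key=lambda h: h[0])
        return run.pop(0)

if __name__ == "__main__":
    import random
    pq, xs = ToyFunnelHeap(), [random.randrange(10**6) for _ in range(3000)]
    for x in xs:
        pq.insert(x)
    assert [pq.delete_min() for _ in xs] == sorted(xs)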

4 Analysis

4.1 Correctness

Correctness of DeleteMin is immediate by the heap-order invariant. For Insert we must show that the invariant is maintained and that $S_{ic_i}$ does not overflow during a sweep ending in link i. After an Insert, the new contents of the buffers on the path p are the smallest elements in $\sigma$, with a distribution exactly as the old contents. Hence, an element on this path can only be smaller than the element occupying the same location before the operation. It follows that heap-order is maintained. Call $B_l$, $K_l$, and $S_{l1}, \ldots, S_{lk_l}$ the lower part of link l. The lower part of link l is emptied each time $c_l$ is reset, so it never contains more than the number of elements inserted into $S_{l1}, S_{l2}, \ldots, S_{lk_l}$ since the last time $c_l$ was reset. It follows by induction on i that the number of elements inserted into $S_{ic_i}$ during a sweep ending in link i is at most $s_1 + \sum_{j=1}^{i-1} k_j s_j = s_i$.

4.2 Complexity

Most of the work performed is the movement of elements upwards in the tree T during the invocations of the binary mergers in T. In the analysis, we account for the I/Os incurred during the filling of an output buffer of a merger by charging them evenly to the elements filled into the buffer. The exception is when an $A_i$ or $B_i$ buffer is not filled completely due to exhaustion of the corresponding input buffers, where we account for the I/Os by other means. We claim that the number of I/Os charged to an element during its ascent in T from an input stream of $K_i$ to the buffer $A_1$ is $O(\frac{1}{B}\log_{M/B} s_i)$, if we identify elements residing in buffers on the path p before a sweep with those residing at the same positions in these buffers after the sweep.

To prove the claim, we assume that the maximal number of small links are kept in cache always; the optimal cache replacement strategy of the cache oblivious model can only incur fewer I/Os. More precisely, let $\Delta_i$ denote the space occupied by links 1 to i. From (1) we have $s_i^{1/3} \leq k_i < 2 s_i^{1/3}$, so the $\Theta(s_i k_i)$ space usage of $S_{i1}, \ldots, S_{ik_i}$ is $\Theta(k_i^4)$, which dominates the space usage of link i. Also from (1) we have $s_i^{4/3} < s_{i+1} < 3 s_i^{4/3}$, so $s_i$ and $k_i$ grow doubly-exponentially with i. Hence, $\Delta_i$ is dominated by the space usage of link i, implying $\Delta_i = \Theta(k_i^4)$. We let $i_M$ be the largest i for which $\Delta_i \leq M$ and assume that links 1 to $i_M$ are kept in cache always.
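As a quick numerical sanity check of the two relations derived from (1) above (an aside on our part, not part of the paper's argument), the following Python snippet evaluates the recurrence for the first dozen links; hyperceil is the power-of-two rounding used before.

def check_growth(n=12):
    # check s_i^(1/3) <= k_i < 2 s_i^(1/3) and s_i^(4/3) < s_{i+1} < 3 s_i^(4/3)
    def hyperceil(x):
        p = 1
        while p < x:
            p *= 2
        return p
    k, s = 2, 8
    for _ in range(n):
        s_next = s * (k + 1)
        assert s ** (1 / 3) <= k < 2 * s ** (1 / 3)
        assert s ** (4 / 3) < s_next < 3 * s ** (4 / 3)
        k, s = hyperceil(s_next ** (1 / 3)), s_next
    print("relations hold for the first", n, "links")

check_growth()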

Consider the ascent of an element from $K_i$ to $B_i$ for $i > i_M$. By Lemma 1, each invocation of $K_i$ incurs $O(k_i + \frac{k_i^3}{B}\log_{M/B} k_i^3)$ I/Os. From $M < \Delta_{i_M+1}$ and the above discussion, we have $M = O(k_i^4)$. The tall cache assumption $B^2 \leq M$ gives $B = O(k_i^2)$, which implies $k_i = O(k_i^3/B)$. As we are not counting invocations of $K_i$ where $B_i$ is not filled completely, i.e. where $K_i$ is exhausted, it follows that each element is charged $O(\frac{1}{B}\log_{M/B} k_i^3) = O(\frac{1}{B}\log_{M/B} s_i)$ I/Os to ascend through $K_i$ and into $B_i$.

The element can also be charged during insertion into $A_j$ for $j = i_M, \ldots, i$. The filling of $A_j$ incurs $O(1 + |A_j|/B)$ I/Os. From $B = O(k_{i_M+1}^2) = O(k_{i_M}^{8/3})$ and $|A_j| = k_j^3$, we see that the last term dominates. Therefore an element is charged $O(1/B)$ per buffer $A_j$, as we only charge when the buffer is filled completely. From $M = O(k_{i_M+1}^4) = O(s_{i_M}^{16/9})$, we by the doubly-exponential growth of the $s_j$ values have $i - i_M = O(\log \log_M s_i) = O(\log_M s_i) = O(\log_{M/B} s_i)$, where the last equality follows from the tall cache assumption. Hence, the ascent through $K_i$ dominates over the insertions into the $A_j$'s, and the claim is proved.

To prove the complexity stated in the introduction, we note that by induction on i, at least $s_i$ insertions take place between each sweep ending in link i. A sweep ending in link i inserts at most $s_i$ elements in $S_{ic_i}$. We let the last $s_i$ insertions preceding the sweep pay for the I/Os charged to these elements during their later ascent through T. By the claim above, this cost is $O(\frac{1}{B}\log_{M/B} s_i)$ I/Os per insertion. We also let these insertions pay for the I/Os incurred by the sweep during the formation and placement of the streams $\sigma_1$, $\sigma_2$, and $\sigma$, and for the I/Os incurred by filling buffers which become exhausted. We claim that these can be covered without altering the $O(\frac{1}{B}\log_{M/B} s_i)$ cost per insertion.

The claim is proved as follows. The formation of $\sigma_1$ is done by a traversal of the path p. By the specified layout of the data structure (including the layout of k-mergers), this traversal is part of a linear scan of the part of memory between $A_1$ and the end of $K_i$. Such a scan takes $O((\Delta_{i-1} + |A_i| + |B_i| + |K_i|)/B) = O(k_i^3/B) = O(s_i/B)$ I/Os. The formation of $\sigma_2$ has already been accounted for by charging the ascending elements. The merge of $\sigma_1$ and $\sigma_2$ into $\sigma$ and the placement of $\sigma$ are not more costly than a traversal of p and $S_{ic_i}$, and hence also incur $O(s_i/B)$ I/Os. To account for the I/Os incurred when filling buffers which become exhausted, we note that $B_i$, and therefore also $A_i$, can only become exhausted once between consecutive sweeps ending in link i. From $|A_i| = |B_i| = k_i^3 = \Theta(s_i)$ it follows that charging each sweep ending in link i an additional cost of $O(\frac{s_i}{B}\log_{M/B} s_i)$ will cover all such fillings, and the claim is proved.

In summary, charging the last $s_i$ insertions preceding a sweep ending in link i a cost of $O(\frac{1}{B}\log_{M/B} s_i)$ I/Os each will cover all I/Os incurred by the data structure. Given a sequence of operations on an initially empty priority queue, let $i_{\max}$ be the largest i for which a sweep ending in link i takes place. We have $s_{i_{\max}} \leq N$, where N is the number of insertions in the sequence. An insertion can be charged by at most one sweep ending in link i for each $i = 1, \ldots, i_{\max}$, so by the doubly-exponential growth of $s_i$, the number of I/Os charged to an insertion is

$O\left(\sum_{k=0}^{\infty} \frac{1}{B}\log_{M/B} N^{(3/4)^k}\right) = O\left(\frac{1}{B}\log_{M/B} N\right)$.

5 Profile Adaptive Performance

To make the performance dependent on $N_l$, we make the following changes to our priority queue. Let $r_i$ denote the number of elements residing in the lower part of link i. The value of $r_i$ is stored at $v_i$ and only needs to be updated when removing an element from $B_i$ and when a sweep operation creates a new $S_{ij}$ list (in the latter case $r_1, \ldots, r_{i-1}$ are reset to zero). The only other modification is the following change to the sweep operation of the insertion algorithm. Instead of finding the lowest index i where $c_i \leq k_i$, we find the lowest index i where either $c_i \leq k_i$ or $r_i \leq k_i s_i / 2$. If $c_i \leq k_i$, the sweep operation proceeds as described in Section 3, and $c_i$ is incremented by one. Otherwise $c_i = k_i + 1$ and $r_i \leq k_i s_i / 2$, in which case we recycle one of the $S_{ij}$ buffers. If there exists an input buffer $S_{ij}$ which is empty, we use $S_{ij}$ as the destination buffer for the sweep operation. If all $S_{ij}$ are non-empty, the two input buffers $S_{ij_1}$ and $S_{ij_2}$ with the smallest numbers of elements have a total of at most $s_i$ elements. Assume without loss of generality $\min S_{ij_1} \geq \min S_{ij_2}$. We merge the contents of $S_{ij_1}$ and $S_{ij_2}$ into $S_{ij_2}$. By merging from the largest elements, this merge only requires reading and writing each element once. Since $\min S_{ij_1} \geq \min S_{ij_2}$, the heap order remains satisfied. Finally, we apply the sweep operation with $S_{ij_1}$ as the destination buffer.
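The merge from the largest elements is the standard back-to-front merge of two sorted arrays into the free tail of one of them. The short Python sketch below illustrates the idea on plain lists; in the real structure the two runs are the fixed-capacity input buffers $S_{ij_1}$ and $S_{ij_2}$ of link i, and the function name and list representation are ours.

def merge_into_second(run1, run2):
    # Merge sorted run1 into sorted run2, writing from the largest elements
    # backwards so that each element is read and written exactly once.
    # Afterwards run1 is empty and can be reused as the destination buffer
    # of the following sweep operation.
    n1, n2 = len(run1), len(run2)
    run2.extend([None] * n1)          # room for run1's elements at the tail
    i, j, w = n1 - 1, n2 - 1, n1 + n2 - 1
    while i >= 0 and j >= 0:
        if run1[i] > run2[j]:
            run2[w] = run1[i]; i -= 1
        else:
            run2[w] = run2[j]; j -= 1
        w -= 1
    while i >= 0:                     # leftover elements of run1, if any
        run2[w] = run1[i]; i -= 1; w -= 1
    run1.clear()                      # run2's remaining prefix is already in place
    return run2

a, b = [3, 9, 20], [1, 4, 5, 17]
assert merge_into_second(a, b) == [1, 3, 4, 5, 9, 17, 20] and a == []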

5.1 Analysis

The correctness follows as in Section 4. For the complexity, we as in Section 4 only have to consider the case where $i > i_M$. We note that in the modified algorithm, the additional number of I/Os required by a sweep operation is $O(k_i + s_i/B)$, which is dominated by the $O(\frac{s_i}{B}\log_{M/B} s_i)$ I/Os required to perform the sweep operation. We will argue that the sweep operation collects $\Omega(s_i)$ elements from links $1, \ldots, i-1$ which have been inserted since the last sweep ending in link i or larger. Furthermore, for half of these elements the value $N_l$ is $\Omega(s_i)$. The claimed amortized complexity $O(\frac{1}{B}\log_{M/B} N_l)$ then follows as in Section 4, except that we now charge the cost of the sweep operation to these $\Omega(s_i)$ elements.

The main property of the modified algorithm is captured by the following invariant: for each i, the links $1, \ldots, i$ contain in total at most $\sum_{j=1}^{i} |A_j| = \sum_{j=1}^{i} k_j^3$ elements which have been removed from $A_{i+1}$ by the binary merger $v_i$ since the last sweep ending at link i+1 or larger.

After a sweep ending at link i+1, we define all elements in $A_j$ to have been removed from $A_l$, where $1 \leq j < l \leq i+1$. When an element e is removed from $A_{i+1}$ and is output to $A_i$, all elements in the lower part of link i must be larger than e. All elements removed from $A_{i+1}$ since the last sweep operation ending at link i+1 or larger were smaller than e. These elements must either be stored in $A_i$ or have been removed from $A_i$ by the merger $v_{i-1}$. It follows that at most $\sum_{j=1}^{i-1} |A_j| + |A_i| - 1$ elements removed from $A_{i+1}$ are present in links $1, \ldots, i$. Hence, the invariant remains valid after moving e from $A_{i+1}$ to $A_i$. By definition, the invariant remains valid after a sweep operation.

A sweep operation ending at link i will create a list with at least $s_1 + \sum_{j=1}^{i-1} k_j s_j / 2 \geq s_i / 2$ elements. From the above invariant, at least $t = s_i/2 - \sum_{j=1}^{i-1} k_j^3 = \Omega(s_i)$ of these elements must have been inserted since the last sweep operation ending at link i or larger. Finally, for each of the three definitions of $N_l$ in Section 1 we have $N_l \geq t/2$ for at least t/2 of these elements: (a) For each of the t/2 most recently inserted elements, at least t/2 elements were present in the priority queue when the element was inserted. (b) For each of the t/2 earliest inserted elements, at least t/2 other elements have been inserted before they themselves are deleted. (c) The t/2 largest elements each have (maximum) rank at least t/2.

This proves the complexity stated in Section 1.

References

[1] A. Aggarwal and J. S. Vitter. The input/output complexity of sorting and related problems. Communications of the ACM, 31(9), September 1988.

[2] L. Arge, M. A. Bender, E. D. Demaine, B. Holland-Minkley, and J. I. Munro. Cache-oblivious priority queue and graph algorithm applications. In Proc. 34th Ann. ACM Symp. on Theory of Computing. ACM Press, 2002. To appear.

[3] R. Bayer and E. McCreight. Organization and maintenance of large ordered indexes. Acta Informatica, 1, 1972.

[4] M. Bender, R. Cole, and R. Raman. Exponential structures for cache-oblivious algorithms. In Proc. 29th International Colloquium on Automata, Languages, and Programming (ICALP), 2002. To appear.

[5] M. A. Bender, E. Demaine, and M. Farach-Colton. Cache-oblivious B-trees. In Proc. 41st Ann. Symp. on Foundations of Computer Science. IEEE Computer Society Press, 2000.

[6] M. A. Bender, Z. Duan, J. Iacono, and J. Wu. A locality-preserving cache-oblivious dynamic dictionary. In Proc. 13th Ann. ACM-SIAM Symp. on Discrete Algorithms, pages 29-39, 2002.

[7] G. S. Brodal and R. Fagerberg. Cache oblivious distribution sweeping. In Proc. 29th International Colloquium on Automata, Languages, and Programming (ICALP), 2002. To appear.

[8] G. S. Brodal, R. Fagerberg, and R. Jacob. Cache oblivious search trees via binary trees of small height. In Proc. 13th Ann. ACM-SIAM Symp. on Discrete Algorithms, pages 39-48, 2002.

[9] M. J. Fischer and M. S. Paterson. Fishspear: A priority queue algorithm. Journal of the ACM, 41(1):3-30, 1994.

[10] M. Frigo, C. E. Leiserson, H. Prokop, and S. Ramachandran. Cache-oblivious algorithms. In 40th Annual Symposium on Foundations of Computer Science. IEEE Computer Society Press, 1999.

[11] J. S. Vitter. External memory algorithms and data structures: Dealing with massive data. ACM Computing Surveys, 33(2), June 2001.
