Implementation with Ruby features. Sorting, Searching and Hashing. Quick Sort. Algorithm of Quick Sort


Implementation with Ruby features: Sorting, Searching and Hashing
Bruno MARTIN, University of Nice - Sophia Antipolis
mailto:bruno.martin@unice.fr
http://www.i3s.unice.fr/~bmartin/mathmods.html

A first, functional version uses the ideas of quicksort:

    def qsort
      return self if empty?
      select { |x| x < first }.qsort +
        select { |x| x == first } +
        select { |x| x > first }.qsort
    end

How can we replace the select operator from Ruby?

Quick Sort: Algorithm of Quick Sort

Invented by C.A.R. Hoare in 1960; easy to implement, a good general-purpose internal sort. It is a divide-and-conquer algorithm:
- take at random an element of the array, say v
- divide the array into two partitions: one contains the elements smaller than v, the other contains the elements greater than v
- put the elements <= v at the beginning of the array (say, at indices between 0 and m-1) and the elements >= v at the end of the array (indices between m+1 and N-1); then you have found the place of v, between the two partitions (at position m)
- recursively call QuickSort on [a_0, ..., a_{m-1}] and on [a_{m+1}, ..., a_{N-1}]
- stop when the partition is reduced to a single element

In practice, the "random" element can be the leftmost or the rightmost element; we choose the rightmost. Our QuickSort runs on a subarray [a_left, ..., a_right]:

    def quick!(left, right)
      if left < right
        m = self.partition(left, right)
        self.quick!(left, m - 1)
        self.quick!(m + 1, right)
      end
    end
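As an aside on replacing select: Ruby's built-in Array#partition splits the remaining elements around the pivot in a single pass. This is an illustrative variant (the method name qsort2 is my own, not from the notes):

```ruby
# Functional quicksort using Array#partition: drop the pivot, split the
# rest into the elements smaller than it and the others, then recurse.
class Array
  def qsort2
    return self if empty?
    pivot = first
    smaller, rest = drop(1).partition { |x| x < pivot }
    smaller.qsort2 + [pivot] + rest.qsort2
  end
end

p [3, 5, 1, 2, 4].qsort2  # => [1, 2, 3, 4, 5]
```

Elements equal to the pivot end up in rest and are sorted into place by the recursive call, so duplicates are handled correctly.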

Algorithm of the Partition of the Array

Example: [3,5,1,2,4].qsort!
- scan (index i) from the left until you find an element >= v (a[i] >= v)
- scan (index j) from the right until you find an element <= v (a[j] <= v)
- both elements are obviously out of place: swap a[i] and a[j]
- continue until the scan pointers cross (j <= i)
- exchange v (a[right]) with the element a[i]

The core of the partition:

    until j <= i do
      i += 1 until self[i] >= v  # scans for i: self[i] >= v
      j -= 1 until self[j] <= v  # scans for j: self[j] <= v
      if i <= j
        self.swap!(i, j)         # exchange both elements
        i += 1; j -= 1           # move the indexes: clean recursion
      end
    end

The big picture:

    def qsort!
      def lqsort(left, right)          # sort from left to right
        if left < right
          v, i, j = self[right], left, right
          until j <= i do
            i += 1 until self[i] >= v  # scans for i: self[i] >= v
            j -= 1 until self[j] <= v  # scans for j: self[j] <= v
            if i <= j
              self.swap!(i, j)         # exchange both elements
              i += 1; j -= 1           # move the indexes: clean recursion
            end
          end
          self.lqsort(left, j)         # sort left part
          self.lqsort(i, right)        # sort right part
        end
      end
      self.lqsort(0, self.length - 1)
      self
    end

Quick Sort: staying within the bounds

We should test that neither i nor j crosses the array bounds left and right. Because v = self[right], you are sure that the loop on i stops at the latest when i = right. But if v = self[right] happens to be the smallest element between left and right, the loop on j might pass the left end of the array. To avoid these tests, you can choose another solution:
- take three elements of the array: the leftmost, the rightmost and the middle one
- sort them
- put the smallest at the leftmost position, the greatest at the rightmost position, and use the middle one as v
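The median-of-three pivot choice described above can be sketched as follows (a sketch under my own naming: swap! and median_of_three! are not defined in the notes):

```ruby
# Median-of-three pivot selection: order the leftmost, middle and
# rightmost elements so that the smallest sits at left, the largest
# at right, and the median (returned) serves as the pivot v.
class Array
  def swap!(i, j)
    self[i], self[j] = self[j], self[i]
    self
  end

  def median_of_three!(left, right)
    mid = (left + right) / 2
    swap!(left, mid)   if self[mid]   < self[left]
    swap!(left, right) if self[right] < self[left]
    swap!(mid, right)  if self[right] < self[mid]
    self[mid]  # the median of the three: use it as pivot v
  end
end
```

After the call, self[left] <= self[mid] <= self[right], so the scans can no longer run past either bound: self[left] stops the j scan and self[right] stops the i scan.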

Quick Sort on worst-case partitioning

The average performance of Quick Sort is about 1.38 N log N: a very efficient algorithm with a very small constant. Quick Sort is a divide-and-conquer algorithm which splits the problem into two recursive calls and combines the results. Divide-and-conquer is a good method every time you can split your problem into smaller pieces and combine the results to obtain the global solution. But divide-and-conquer leads to an efficient algorithm only when the problem is divided without overlap.

Quick Sort is very inefficient on already sorted sets: O(N^2). Suppose a[0], ..., a[N-1] is sorted, without equal elements:
- at the first call, v = a[N-1]
- the while loop on i continues until i = N-1 and stops because a[N-1] = v: the scan does N comparisons
- the while loop on j stops at j = N-2 because a[N-2] < v: 1 comparison
- we exchange a[N-1] with itself: 1 exchange
- we call QuickSort on a[0], ..., a[N-2] and on a[N-1], ..., a[N-1], which immediately stops

So the total is (N+1) + N + ... which is about N(N+3)/2: QuickSort is in O(N^2) on sorted sets.

Quick Sort on average-case partitioning

Let C_N be the average number of comparisons for sorting N elements:

    C_N = (N+1) + (1/N) * sum_{k=1}^{N} (C_{k-1} + C_{N-k})

- N+1 comparisons during the two inner whiles (N-1, plus 2 when i and j cross)
- plus the average number of comparisons on the two sub-arrays: ((C_0 + C_{N-1}) + (C_1 + C_{N-2}) + ... + (C_{N-1} + C_0)) / N

By symmetry:

    C_N = (N+1) + (2/N) * sum_{k=1}^{N} C_{k-1}

Multiply by N, subtract the same equality for N-1 to cancel the sum:

    N*C_N - (N-1)*C_{N-1} = 2N + 2*C_{N-1}, i.e. N*C_N = (N+1)*C_{N-1} + 2N

Divide both sides by N(N+1) to obtain the recurrence:

    C_N/(N+1) = C_{N-1}/N + 2/(N+1) = C_{N-2}/(N-1) + 2/N + 2/(N+1) = ... ≈ 2 * sum_{k=1}^{N} 1/k

Approximation: sum_{k=1}^{N} 1/k ≈ integral of dx/x ≈ ln N, so

    C_N ≈ 2N ln N = 2 ln(2) N log2(N) ≈ 1.38 N log2(N)

Intuition for the performance of quick sort

Quicksort's running time depends on whether the partitioning is balanced:
- worst-case partitioning produces one region with 1 element and one with N-1 elements: O(N^2)
- best-case partitioning produces two regions with N/2 elements (C_N = N + 2*C_{N/2}): O(N log N)

    worst case:                 best case:
         N                            N
        / \                         /   \
       1  N-1                    N/2     N/2       }
           / \                   / \     / \       }  log N levels
          1  N-2              N/4 N/4 N/4 N/4      }
              ...
               1
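The quadratic behaviour on sorted input can be checked empirically. The sketch below counts the pivot comparisons of the select-based qsort from the first page (the counter and the method name qsort_counted are my own); on sorted, distinct input the pivot is always the minimum, so one partition is empty at every level:

```ruby
# Count the comparisons performed by the first select pass of the
# functional quicksort.  On [1..N] sorted, the levels have sizes
# N, N-1, ..., 1, so the count is N(N+1)/2; shuffled input stays
# around N log2 N, far below that.
$comparisons = 0

class Array
  def qsort_counted
    return self if empty?
    pivot = first
    smaller = select { |x| $comparisons += 1; x < pivot }
    equal   = select { |x| x == pivot }
    larger  = select { |x| x > pivot }
    smaller.qsort_counted + equal + larger.qsort_counted
  end
end

(1..100).to_a.qsort_counted
sorted_cost = $comparisons       # 100 + 99 + ... + 1 = 5050

$comparisons = 0
(1..100).to_a.shuffle.qsort_counted
shuffled_cost = $comparisons     # typically a few hundred
```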

Lower Bound for Sorting

Is sorting an array of size N possible in fewer than N log N operations? If you only use element comparisons, it is impossible. You need to model your computation problem:
- express each sort by a decision tree where each internal node represents the comparison between two elements
- the left child corresponds to the negative answer and the right child to the positive one
- each leaf represents a given permutation

Representing the decision tree model

For the set to sort {a1, a2, a3}, the corresponding decision tree is:

                    a1 > a2
                  /         \
            a2 > a3          a1 > a3
            /     \          /      \
    (a1,a2,a3)  a1 > a3  (a2,a1,a3)  a2 > a3
                /     \              /     \
        (a1,a3,a2) (a3,a1,a2) (a2,a3,a1) (a3,a2,a1)

The decision tree to sort N elements has N! leaves (all possible permutations). A binary tree with N! leaves has a height of order log(N!), which is approximately N log N (Stirling). Hence N log N is a lower bound for comparison-based sorting.

Searching: Overview

Searching is a fundamental operation in many tasks: retrieving a particular piece of information among a large amount of stored data. The stored data can be viewed as a set, divided into records with a key field used for searching. The goal of searching: find the records whose key matches a given search key. Dictionaries and symbol tables are two examples of data structures needed for searching.
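Stirling's estimate log(N!) ≈ N log N can be checked numerically; this small computation is my own illustration, not part of the notes:

```ruby
# Compare log2(N!) = sum of log2(k) for k = 1..N against N*log2(N):
# the ratio tends to 1 as N grows, which is why log(N!) and N log N
# give the same lower bound up to lower-order terms.
n = 1000
log2_factorial = (1..n).sum { |k| Math.log2(k) }
ratio = log2_factorial / (n * Math.log2(n))
puts ratio.round(3)   # about 0.86 for N = 1000, creeping towards 1
```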

Operations

The time complexity often depends on the structure given to the set of records (e.g. lists, sets, arrays, trees, ...). So, when programming a searching algorithm on a structure, one often needs to provide operations like insertion, deletion and sometimes sorting of the set of records. In any case, the time complexity of the searching algorithm may be sensitive to operations like comparison of keys, insertion of one record in the set, shift of records, exchange of records, ...

Sequential Search in a Sorted List is in O(N)

Sequential searching in a sorted list uses approximately N/2 comparisons for both a successful and an unsuccessful search. The (average) complexity of the successful search in a sorted list equals that of the successful search in an array. For the unsuccessful search: the search can be ended by each of the elements of the list. We do 1 comparison if the searched key is less than the first element, ..., N+1 comparisons if the key is greater than the last one (the sentinel); on average: (1 + 2 + ... + (N+1))/(N+1) = (N+2)/2, since 1 + 2 + ... + (N+1) = (N+1)(N+2)/2.

Sequential Search in an Array is in O(N)

Sequential search in an array uses:
- N+1 comparisons for an unsuccessful search in the best, average and worst case
- (N+1)/2 comparisons for a successful search on the average (1)

Suppose that all records have the same probability to be found. We do 1 comparison to find the first one, ..., N to find the last one; on the average: (1 + 2 + ... + N)/N = (N+1)/2.

An Elementary Algorithm: the Binary Search

When the set of records gets large and the records are ordered, to reduce the searching time, use a divide-and-conquer strategy:
1 divide the set into two parts
2 determine in which part the key might belong
3 repeat the search on this part of the set

(1) average = mean = (sum of all the entries) / (number of entries)
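The three steps above translate directly into an iterative binary search over a sorted array (a standard sketch; the names are my own):

```ruby
# Iterative binary search: split the index range in two, keep the half
# that may contain the key, stop when found or when the range is empty.
def binary_search(a, key)
  left, right = 0, a.length - 1
  while left <= right
    mid = (left + right) / 2
    case key <=> a[mid]
    when 0  then return mid      # found at position mid
    when -1 then right = mid - 1 # key can only be in the left half
    else         left = mid + 1  # key can only be in the right half
    end
  end
  nil                            # unsuccessful search
end
```

Each iteration halves the range, which is where the log2 N bound of the next section comes from.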

Application to numerical analysis

Binary search can find an approximation of the zeroes of a continuous function, by the

Theorem (Intermediate value theorem). If the function f(x) = y is continuous on [a, b] and u is a number such that f(a) < u < f(b), then there is a c in [a, b] such that f(c) = u.

provided one can evaluate the sign of f((a+b)/2). Let f be strictly increasing on [a, b] with f(a) < 0 < f(b). Binary search allows to find y such that f(y) = 0:
1 start with the pair (a, b)
2 evaluate v = f((a+b)/2)
3 if v < 0, replace a by (a+b)/2, otherwise replace b by (a+b)/2
4 iterate on the new pair until the difference between the two values is less than an arbitrary given precision

Performance of Binary Search

Proof 1: consider the tree of the recursive calls of the search. At each call the array is split into two halves, so the tree is a full binary tree. The number of comparisons equals the tree height: log2 N.

Proof 2: the number of comparisons at size N equals the number of comparisons in one subarray of half size, plus 1 because you compare with the root. Solve the recurrence C_N = C_{N/2} + 1 for N >= 2, with C_1 = 0. For N = 2^n:

    C_{2^n} = C_{2^{n-1}} + 1 = C_{2^{n-2}} + 2 = ... = C_1 + n = n = log2 N

Performance of Binary Search: order of magnitude

On the average case: binary search uses approximately log N comparisons for both successful and unsuccessful search in the best, average and worst case. The maximal number of comparisons is reached when the search is unsuccessful.

A successful sequential search in a set of N = 10000 elements takes about 5000 comparisons; a successful binary search in the same set takes 14 comparisons. BUT inserting an element:
- in an array takes 1 operation
- in a sorted array takes N operations: to find the place and shift right the other elements
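The four-step iteration above can be sketched as a bisection routine (the function name bisect and the example f(x) = x^2 - 2 are my own choices):

```ruby
# Bisection: f is continuous and strictly increasing on [a, b] with
# f(a) < 0 < f(b).  Halve the interval, always keeping the sign change
# inside it, until it is shorter than the requested precision eps.
def bisect(a, b, eps = 1e-9)
  while b - a > eps
    mid = (a + b) / 2.0
    if yield(mid) < 0
      a = mid   # the zero is in the right half
    else
      b = mid   # the zero is in the left half
    end
  end
  (a + b) / 2.0
end

root = bisect(1.0, 2.0) { |x| x * x - 2 }  # approximates sqrt(2)
```

This is exactly binary search on the real interval: each step is one sign evaluation, so reaching precision eps costs about log2((b - a)/eps) evaluations.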

An Elementary Algorithm: Interpolation Search

Dictionary search: if the word begins with B you look near the beginning, and if the word begins with T you turn a lot of pages. Suppose you search for the key k. In binary search you cut the array in the middle:

    middle = left + (1/2) * (right - left)

In interpolation search you take the values of the keys into account, replacing 1/2 by a better progression:

    position = left + ((k - A[left].key) / (A[right].key - A[left].key)) * (right - left)

Performance of the Interpolation Search

Interpolation search uses approximately log(log N) comparisons for both successful and unsuccessful search in the array. But interpolation search heavily depends on the fact that the keys are well distributed over the interval, and the method requires some computation: for small sets the log N of binary search is close to log(log N). So interpolation search should be used for large sets, in applications where comparisons are particularly expensive, or for external methods where access costs are high.

Hashing

Hashing is a completely different method of searching. The idea is to access the record in a table directly, using its key, the same way an index accesses an entry in an array. We use a hash function that computes a table index from the key. Basic operations: insert, remove, search.
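The position formula above can be sketched for a sorted integer array (function name and the guard for equal end keys are my own):

```ruby
# Interpolation search: estimate the probable position of the key from
# its value relative to the keys at the ends of the current range,
# instead of always probing the middle.
def interpolation_search(a, key)
  left, right = 0, a.length - 1
  while left <= right && key >= a[left] && key <= a[right]
    if a[right] == a[left]               # avoid dividing by zero
      return a[left] == key ? left : nil
    end
    pos = left + ((key - a[left]) * (right - left)) / (a[right] - a[left])
    case key <=> a[pos]
    when 0  then return pos
    when -1 then right = pos - 1
    else         left = pos + 1
    end
  end
  nil
end
```

The loop condition also rejects keys outside [a[left], a[right]] early, which is what makes the interpolation estimate safe to use as an index.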

The steps in hashing:
1 compute a hash function which maps keys to table addresses. Since there are more records (N) than indexes (M) in the table, two or more keys may hash to the same table address: it's the collision problem
2 the collision resolution process

Good hash functions should uniformly distribute the entries in the table, since, if the function uniformly distributes the keys, the complexity of searching is approximately divided by the table's size. An example of a hash function is

    hash(key) = (key[0]*(2^k)^0 + key[1]*(2^k)^1 + ... + key[n]*(2^k)^n) mod M

Why does M have to be prime?

Suppose you choose M = 2^k; then XXX mod M is unaffected by adding multiples of 2^k to XXX, so hash(key) = key[0]: the hash only depends on the 1st character of the key. The simplest way to ensure that the hash function takes all the characters of a key into account is to take M prime.

Transform Keys into Integers in [[0, M-1]]

If your key is already a large integer, choose M to be a prime and compute key mod M. If your key is an uppercase character string:
- encode each character in a 5-bit code (5 bits (2^5 = 32) are required to encode 26 items): each letter is encoded by the binary value of its rank in the alphabet
- compute the modulo of the corresponding decimal value

Example: ABC gives 1*(2^5)^2 + 2*(2^5)^1 + 3*(2^5)^0 = 1091, and 1091 mod M indexes the table.

How to Handle the Collision Process

We have an array of size M, called the hash table, and a hash function which gives for any key a possible entry in this array. Problem: decide what to do when 2 keys hash to the same address. A first simple method is to build, for each table entry, a linked list of the records whose keys hash to that entry. Colliding records are chained together: we call it separate chaining. At the initialization, the hash table is an array of M pointers to empty linked lists.
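The ABC example can be reproduced with a short helper (the function name hash_key and the sample moduli are my own; Horner's rule is used so intermediate values never exceed 32*M):

```ruby
# Encode an uppercase string in base 2^5 = 32 (A=1, B=2, ...) and
# reduce modulo M at each step (Horner's rule), here with the leftmost
# character as the most significant digit, matching the ABC example.
def hash_key(key, m)
  key.chars.reduce(0) { |h, c| (h * 32 + (c.ord - 'A'.ord + 1)) % m }
end

hash_key("ABC", 2000)  # => 1091, i.e. 1*32^2 + 2*32 + 3
hash_key("ABC", 101)   # => 81, i.e. 1091 mod the prime 101
```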

Searching a record in a Hash Table with linked lists

The main operation on a hash table is to search a record with its key:
- compute the hash value of the key: hash(key) = i
- access the linked list at position i: HashTable[i]
- if there is more than your record in the list, you have collisions: searching becomes a search in a list; iterate on each record, comparing the keys
- unsuccessful search: you iterate down the list without finding your record

Operations of insertion and removal of records in a hash table become linked-list operations.

Performances

Good hash functions uniformly distribute the entries over the table; the expected costs are in O(alpha), where alpha = N/M is the table's filling rate (load factor) and L_i is the length of the i-th list:
- Unsuccessful: Q-(M, N) = (1/M) * sum_{i=1}^{M} (1 + L_i) = 1 + alpha, since sum_i L_i = N and the search scans a whole list
- Successful: searching for an element in the table equals the cost of inserting it when only the elements inserted before it were already in the table: Q+(M, N) = (1/N) * sum_{i=0}^{N-1} Q-(M, i) = 1 + (N-1)/(2M) = 1 + alpha/2 - 1/(2M)

The interest of hashing is that it is efficient and easy to program.

Alternative proof for the successful search

Let x_i be the i-th element inserted into the table and k_i = key[x_i]. Let X_ij = 1{h(k_i) = h(k_j)} for all i, j (indicator random variables). Under simple uniform hashing, Pr{h(k_i) = h(k_j)} = 1/M, so E[X_ij] = 1/M. The expected number of elements examined in a successful search is

    E[ (1/N) * sum_{i=1}^{N} (1 + sum_{j=i+1}^{N} X_ij) ]    (1)

where sum_{j=i+1}^{N} X_ij is the number of elements inserted after x_i into the same slot as x_i. By linearity of expectation:

    (1) = (1/N) * sum_{i=1}^{N} (1 + sum_{j=i+1}^{N} E[X_ij])
        = (1/N) * sum_{i=1}^{N} (1 + sum_{j=i+1}^{N} 1/M)
        = 1 + (1/(N*M)) * sum_{i=1}^{N} (N - i)
        = 1 + (1/(N*M)) * (N^2 - N(N+1)/2)
        = 1 + (N-1)/(2M) = 1 + alpha/2 - 1/(2M)
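A minimal separate-chaining table along these lines (the class name and the use of Ruby's built-in Object#hash as the hash function are my own choices):

```ruby
# Hash table with separate chaining: an array of M buckets, each a
# (possibly empty) list of [key, value] pairs; search, insert and
# remove all reduce to list operations on one bucket.
class ChainedHash
  def initialize(m = 101)
    @m = m
    @table = Array.new(m) { [] }   # M pointers to empty lists
  end

  def insert(key, value)
    bucket = @table[key.hash % @m]
    pair = bucket.find { |k, _| k == key }
    pair ? pair[1] = value : bucket << [key, value]
  end

  def search(key)
    pair = @table[key.hash % @m].find { |k, _| k == key }
    pair && pair[1]                # nil on an unsuccessful search
  end

  def remove(key)
    @table[key.hash % @m].reject! { |k, _| k == key }
  end
end
```

With N inserted keys, the average bucket holds alpha = N/M pairs, which is the O(alpha) cost analysed above.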

Expected cost interpretation

If N = O(M), then alpha = N/M = O(M)/M = O(1):
- searching takes constant time on the average
- insertion is O(1) in the worst case
- deletion takes O(1) worst-case time for doubly linked lists
- hence, all dictionary operations take O(1) time on average with hash tables with chaining

Another structure for Hash Table: Linear Probing

When the number of elements can be estimated in advance, you can avoid using any linked list: you store the records in a table of size M > N, and the empty places in the table help you for collision resolution. It is called linear probing.

Searching and Inserting in Linear Probing

If the place HashTable[hash(key)] is already busy:
- if the keys match, the search is successful
- else there is a collision: you search at the next place i+1
- if that place is free, the search is unsuccessful and you have found a place to insert your record
- else if the keys match, the search is successful
- if the keys differ, try the next position i+2, and so on
- but be careful: the position after i is i+1 mod M
- and check that the table is not full, otherwise the iteration won't terminate
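The probing rules above, as a sketch (the class name is my own; one slot is always kept free so that an unsuccessful search terminates):

```ruby
# Open addressing with linear probing: probe slots i, i+1 mod M, ...
# until the key or a free slot is found.
class LinearProbingHash
  def initialize(m)
    @m = m
    @keys = Array.new(m)     # nil marks a free place
    @values = Array.new(m)
    @count = 0
  end

  def insert(key, value)
    raise "table full" if @count >= @m - 1  # keep one slot free
    i = key.hash % @m
    i = (i + 1) % @m until @keys[i].nil? || @keys[i] == key
    @count += 1 if @keys[i].nil?
    @keys[i] = key
    @values[i] = value
  end

  def search(key)
    i = key.hash % @m
    until @keys[i].nil?                 # a free place ends the search
      return @values[i] if @keys[i] == key
      i = (i + 1) % @m
    end
    nil                                 # unsuccessful
  end
end
```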

Problem with Linear Probing

Suppose you would like to perform the operation of suppression. To suppress an element in the hash table, you search it, you remove it from the array, and the place is free again. Is it so simple? Suppose key1 and key2 (different) hash to the same address i:
- you insert key1 first, at position i
- you try to insert key2 at position i, find it busy, and finally insert it at position i+1
- now you suppress key1: the place i becomes free
- you search for key2: it hashes to a free position i, so its search is unsuccessful, yet key2 is in the table

So a place must have one of three statuses: free, busy and suppressed.

Performances in Hash Table with linear probing

This hashing works because it guarantees that when you search for a particular key, you look at every key that hashes to the same table address. In linear probing, when the table begins to fill up, you also look at other keys: 2 different collision sets may be stuck together; this is the clustering problem. Linear probing is very slow when the table is almost full because of the clustering problem, and when the table is full you cannot continue to use it.

Eliminating the Clustering Problem

Instead of examining each successive entry, we use a second hash function to compute a fixed increment for the probe sequence (instead of using 1 as in linear probing). Depending on the choice of the second hash function, the program may not work: obviously an increment of 0 leads to an infinite loop.

Conclusion on Searching

Searching is a classical problem in computer science: various algorithms have been studied and are widely used. There are many empirical and analytic results that make the utility of searching evident for a broad variety of applications. Hashing is preferred to binary tree searches for many applications because it is simple to implement and can provide very fast, constant searching times when space is available for a large enough table.
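The free/busy/suppressed statuses can be sketched with a tombstone marker; to make the key1/key2 scenario above reproducible, this table takes its hash function as a parameter (all names here are my own):

```ruby
# Linear probing with deletion: a DELETED tombstone keeps probe chains
# intact, so a key inserted past a collision is still reachable after
# the colliding key has been suppressed.
DELETED = Object.new   # the "suppressed" status

class ProbingTable
  def initialize(m, &hash_fn)
    @m = m
    @slots = Array.new(m)              # nil = free, DELETED = suppressed
    @hash = hash_fn || ->(k) { k.hash }
  end

  def insert(key, value)
    i = @hash.call(key) % @m
    @m.times do
      slot = @slots[i]
      if slot.nil? || slot.equal?(DELETED) || slot[0] == key
        # simplification: reuse the first free/suppressed slot without
        # first checking that the key is not further along the chain
        @slots[i] = [key, value]
        return
      end
      i = (i + 1) % @m
    end
    raise "table full"
  end

  def search(key)
    i = @hash.call(key) % @m
    @m.times do
      slot = @slots[i]
      return nil if slot.nil?          # a free slot ends the search
      return slot[1] if !slot.equal?(DELETED) && slot[0] == key
      i = (i + 1) % @m                 # skip suppressed slots
    end
    nil
  end

  def delete(key)
    i = @hash.call(key) % @m
    @m.times do
      slot = @slots[i]
      return nil if slot.nil?
      if !slot.equal?(DELETED) && slot[0] == key
        @slots[i] = DELETED            # suppressed, not free
        return key
      end
      i = (i + 1) % @m
    end
    nil
  end
end

# Reproduce the scenario: key1 and key2 both hash to slot 0.
t = ProbingTable.new(7) { |_k| 0 }
t.insert("key1", 1)   # lands in slot 0
t.insert("key2", 2)   # collides, lands in slot 1
t.delete("key1")      # slot 0 becomes suppressed, not free
t.search("key2")      # => 2: still found, thanks to the tombstone
```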

Hashing in Ruby

    zip = Hash.new
    zip = {"06000" => "Nice", "06100" => "Nice", "06110" => "Le Cannet",
           "06130" => "Grasse", "06140" => "Coursegoules",
           "06140" => "Tourrettes sur Loup", "06140" => "Vence",
           "06190" => "Rocquebrune Cap Martin", "06200" => "Nice",
           "06230" => "Saint Jean Cap Ferrat", "06230" => "Villefranche sur Mer"}
    zip["06300"] = "Nice"   # adds a new entry
    zip.keys
    => ["06140", "06130", "06230", "06110", "06000", "06100", "06200", "06300", "06190"]
    zip.values
    => ["Vence", "Grasse", "Villefranche sur Mer", "Le Cannet", "Nice", "Nice", "Nice",
        "Nice", "Rocquebrune Cap Martin"]
    zip.select { |key, val| val == "Nice" }
    => [["06000", "Nice"], ["06100", "Nice"], ["06200", "Nice"], ["06300", "Nice"]]
    zip.index "Nice"
    => "06000"
    zip.each { |k, v| puts "#{k}/#{v}" }
    06140/Vence
    06130/Grasse
    06230/Villefranche sur Mer
    06110/Le Cannet
    06000/Nice
    06100/Nice
    06200/Nice
    06300/Nice
    06190/Rocquebrune Cap Martin

Note that the literal repeats the keys "06140" and "06230": in a Ruby Hash the last assignment wins, which is why only nine keys remain and 06140 maps to "Vence".


More information

1. Meshes. D7013E Lecture 14

1. Meshes. D7013E Lecture 14 D7013E Lecture 14 Quadtrees Mesh Generation 1. Meshes Input: Components in the form of disjoint polygonal objects Integer coordinates, 0, 45, 90, or 135 angles Output: A triangular mesh Conforming: A triangle

More information

Data Structures and Algorithm Analysis (CSC317) Hash tables (part2)

Data Structures and Algorithm Analysis (CSC317) Hash tables (part2) Data Structures and Algorithm Analysis (CSC317) Hash tables (part2) Hash table We have elements with key and satellite data Operations performed: Insert, Delete, Search/lookup We don t maintain order information

More information

Sorting and Selection

Sorting and Selection Sorting and Selection Introduction Divide and Conquer Merge-Sort Quick-Sort Radix-Sort Bucket-Sort 10-1 Introduction Assuming we have a sequence S storing a list of keyelement entries. The key of the element

More information

CS 137 Part 8. Merge Sort, Quick Sort, Binary Search. November 20th, 2017

CS 137 Part 8. Merge Sort, Quick Sort, Binary Search. November 20th, 2017 CS 137 Part 8 Merge Sort, Quick Sort, Binary Search November 20th, 2017 This Week We re going to see two more complicated sorting algorithms that will be our first introduction to O(n log n) sorting algorithms.

More information

Divide-and-Conquer. The most-well known algorithm design strategy: smaller instances. combining these solutions

Divide-and-Conquer. The most-well known algorithm design strategy: smaller instances. combining these solutions Divide-and-Conquer The most-well known algorithm design strategy: 1. Divide instance of problem into two or more smaller instances 2. Solve smaller instances recursively 3. Obtain solution to original

More information

! A Hash Table is used to implement a set, ! The table uses a function that maps an. ! The function is called a hash function.

! A Hash Table is used to implement a set, ! The table uses a function that maps an. ! The function is called a hash function. Hash Tables Chapter 20 CS 3358 Summer II 2013 Jill Seaman Sections 201, 202, 203, 204 (not 2042), 205 1 What are hash tables?! A Hash Table is used to implement a set, providing basic operations in constant

More information

Treaps. 1 Binary Search Trees (BSTs) CSE341T/CSE549T 11/05/2014. Lecture 19

Treaps. 1 Binary Search Trees (BSTs) CSE341T/CSE549T 11/05/2014. Lecture 19 CSE34T/CSE549T /05/04 Lecture 9 Treaps Binary Search Trees (BSTs) Search trees are tree-based data structures that can be used to store and search for items that satisfy a total order. There are many types

More information

Chapter 9: Maps, Dictionaries, Hashing

Chapter 9: Maps, Dictionaries, Hashing Chapter 9: 0 1 025-612-0001 2 981-101-0002 3 4 451-229-0004 Maps, Dictionaries, Hashing Nancy Amato Parasol Lab, Dept. CSE, Texas A&M University Acknowledgement: These slides are adapted from slides provided

More information

Quick Sort. CSE Data Structures May 15, 2002

Quick Sort. CSE Data Structures May 15, 2002 Quick Sort CSE 373 - Data Structures May 15, 2002 Readings and References Reading Section 7.7, Data Structures and Algorithm Analysis in C, Weiss Other References C LR 15-May-02 CSE 373 - Data Structures

More information

Lecture 7. Transform-and-Conquer

Lecture 7. Transform-and-Conquer Lecture 7 Transform-and-Conquer 6-1 Transform and Conquer This group of techniques solves a problem by a transformation to a simpler/more convenient instance of the same problem (instance simplification)

More information

17/05/2018. Outline. Outline. Divide and Conquer. Control Abstraction for Divide &Conquer. Outline. Module 2: Divide and Conquer

17/05/2018. Outline. Outline. Divide and Conquer. Control Abstraction for Divide &Conquer. Outline. Module 2: Divide and Conquer Module 2: Divide and Conquer Divide and Conquer Control Abstraction for Divide &Conquer 1 Recurrence equation for Divide and Conquer: If the size of problem p is n and the sizes of the k sub problems are

More information

Tree-Structured Indexes

Tree-Structured Indexes Tree-Structured Indexes Yanlei Diao UMass Amherst Slides Courtesy of R. Ramakrishnan and J. Gehrke Access Methods v File of records: Abstraction of disk storage for query processing (1) Sequential scan;

More information

DATA STRUCTURES AND ALGORITHMS

DATA STRUCTURES AND ALGORITHMS LECTURE 11 Babeş - Bolyai University Computer Science and Mathematics Faculty 2017-2018 In Lecture 9-10... Hash tables ADT Stack ADT Queue ADT Deque ADT Priority Queue Hash tables Today Hash tables 1 Hash

More information

Symbol Table. Symbol table is used widely in many applications. dictionary is a kind of symbol table data dictionary is database management

Symbol Table. Symbol table is used widely in many applications. dictionary is a kind of symbol table data dictionary is database management Hashing Symbol Table Symbol table is used widely in many applications. dictionary is a kind of symbol table data dictionary is database management In general, the following operations are performed on

More information

( D. Θ n. ( ) f n ( ) D. Ο%

( D. Θ n. ( ) f n ( ) D. Ο% CSE 0 Name Test Spring 0 Multiple Choice. Write your answer to the LEFT of each problem. points each. The time to run the code below is in: for i=n; i>=; i--) for j=; j

More information

4. Sorting and Order-Statistics

4. Sorting and Order-Statistics 4. Sorting and Order-Statistics 4. Sorting and Order-Statistics The sorting problem consists in the following : Input : a sequence of n elements (a 1, a 2,..., a n ). Output : a permutation (a 1, a 2,...,

More information

Hashing. Yufei Tao. Department of Computer Science and Engineering Chinese University of Hong Kong

Hashing. Yufei Tao. Department of Computer Science and Engineering Chinese University of Hong Kong Department of Computer Science and Engineering Chinese University of Hong Kong In this lecture, we will revisit the dictionary search problem, where we want to locate an integer v in a set of size n or

More information

Unit 6 Chapter 15 EXAMPLES OF COMPLEXITY CALCULATION

Unit 6 Chapter 15 EXAMPLES OF COMPLEXITY CALCULATION DESIGN AND ANALYSIS OF ALGORITHMS Unit 6 Chapter 15 EXAMPLES OF COMPLEXITY CALCULATION http://milanvachhani.blogspot.in EXAMPLES FROM THE SORTING WORLD Sorting provides a good set of examples for analyzing

More information

INSTITUTE OF AERONAUTICAL ENGINEERING

INSTITUTE OF AERONAUTICAL ENGINEERING INSTITUTE OF AERONAUTICAL ENGINEERING (Autonomous) Dundigal, Hyderabad - 500 043 COMPUTER SCIENCE AND ENGINEERING TUTORIAL QUESTION BANK Course Name Course Code Class Branch DATA STRUCTURES ACS002 B. Tech

More information

) $ f ( n) " %( g( n)

) $ f ( n)  %( g( n) CSE 0 Name Test Spring 008 Last Digits of Mav ID # Multiple Choice. Write your answer to the LEFT of each problem. points each. The time to compute the sum of the n elements of an integer array is: # A.

More information

CSE 332: Data Structures & Parallelism Lecture 12: Comparison Sorting. Ruth Anderson Winter 2019

CSE 332: Data Structures & Parallelism Lecture 12: Comparison Sorting. Ruth Anderson Winter 2019 CSE 332: Data Structures & Parallelism Lecture 12: Comparison Sorting Ruth Anderson Winter 2019 Today Sorting Comparison sorting 2/08/2019 2 Introduction to sorting Stacks, queues, priority queues, and

More information

Sorting: Given a list A with n elements possessing a total order, return a list with the same elements in non-decreasing order.

Sorting: Given a list A with n elements possessing a total order, return a list with the same elements in non-decreasing order. Sorting The sorting problem is defined as follows: Sorting: Given a list A with n elements possessing a total order, return a list with the same elements in non-decreasing order. Remember that total order

More information

The divide-and-conquer paradigm involves three steps at each level of the recursion: Divide the problem into a number of subproblems.

The divide-and-conquer paradigm involves three steps at each level of the recursion: Divide the problem into a number of subproblems. 2.3 Designing algorithms There are many ways to design algorithms. Insertion sort uses an incremental approach: having sorted the subarray A[1 j - 1], we insert the single element A[j] into its proper

More information

logn D. Θ C. Θ n 2 ( ) ( ) f n B. nlogn Ο n2 n 2 D. Ο & % ( C. Θ # ( D. Θ n ( ) Ω f ( n)

logn D. Θ C. Θ n 2 ( ) ( ) f n B. nlogn Ο n2 n 2 D. Ο & % ( C. Θ # ( D. Θ n ( ) Ω f ( n) CSE 0 Test Your name as it appears on your UTA ID Card Fall 0 Multiple Choice:. Write the letter of your answer on the line ) to the LEFT of each problem.. CIRCLED ANSWERS DO NOT COUNT.. points each. The

More information

Chapter 7. Space and Time Tradeoffs. Copyright 2007 Pearson Addison-Wesley. All rights reserved.

Chapter 7. Space and Time Tradeoffs. Copyright 2007 Pearson Addison-Wesley. All rights reserved. Chapter 7 Space and Time Tradeoffs Copyright 2007 Pearson Addison-Wesley. All rights reserved. Space-for-time tradeoffs Two varieties of space-for-time algorithms: input enhancement preprocess the input

More information

Introduction. hashing performs basic operations, such as insertion, better than other ADTs we ve seen so far

Introduction. hashing performs basic operations, such as insertion, better than other ADTs we ve seen so far Chapter 5 Hashing 2 Introduction hashing performs basic operations, such as insertion, deletion, and finds in average time better than other ADTs we ve seen so far 3 Hashing a hash table is merely an hashing

More information

Lecture 8: Mergesort / Quicksort Steven Skiena

Lecture 8: Mergesort / Quicksort Steven Skiena Lecture 8: Mergesort / Quicksort Steven Skiena Department of Computer Science State University of New York Stony Brook, NY 11794 4400 http://www.cs.stonybrook.edu/ skiena Problem of the Day Give an efficient

More information

Today s Outline. CS 561, Lecture 8. Direct Addressing Problem. Hash Tables. Hash Tables Trees. Jared Saia University of New Mexico

Today s Outline. CS 561, Lecture 8. Direct Addressing Problem. Hash Tables. Hash Tables Trees. Jared Saia University of New Mexico Today s Outline CS 561, Lecture 8 Jared Saia University of New Mexico Hash Tables Trees 1 Direct Addressing Problem Hash Tables If universe U is large, storing the array T may be impractical Also much

More information

Randomized Algorithms, Hash Functions

Randomized Algorithms, Hash Functions Randomized Algorithms, Hash Functions Lecture A Tiefenbruck MWF 9-9:50am Center 212 Lecture B Jones MWF 2-2:50pm Center 214 Lecture C Tiefenbruck MWF 11-11:50am Center 212 http://cseweb.ucsd.edu/classes/wi16/cse21-abc/

More information

UNIT-2. Problem of size n. Sub-problem 1 size n/2. Sub-problem 2 size n/2. Solution to the original problem

UNIT-2. Problem of size n. Sub-problem 1 size n/2. Sub-problem 2 size n/2. Solution to the original problem Divide-and-conquer method: Divide-and-conquer is probably the best known general algorithm design technique. The principle behind the Divide-and-conquer algorithm design technique is that it is easier

More information

of characters from an alphabet, then, the hash function could be:

of characters from an alphabet, then, the hash function could be: Module 7: Hashing Dr. Natarajan Meghanathan Professor of Computer Science Jackson State University Jackson, MS 39217 E-mail: natarajan.meghanathan@jsums.edu Hashing A very efficient method for implementing

More information

CS 303 Design and Analysis of Algorithms

CS 303 Design and Analysis of Algorithms Mid-term CS 303 Design and Analysis of Algorithms Review For Midterm Dong Xu (Based on class note of David Luebke) 12:55-1:55pm, Friday, March 19 Close book Bring your calculator 30% of your final score

More information

Comparison Sorts. Chapter 9.4, 12.1, 12.2

Comparison Sorts. Chapter 9.4, 12.1, 12.2 Comparison Sorts Chapter 9.4, 12.1, 12.2 Sorting We have seen the advantage of sorted data representations for a number of applications Sparse vectors Maps Dictionaries Here we consider the problem of

More information

Final Examination CSE 100 UCSD (Practice)

Final Examination CSE 100 UCSD (Practice) Final Examination UCSD (Practice) RULES: 1. Don t start the exam until the instructor says to. 2. This is a closed-book, closed-notes, no-calculator exam. Don t refer to any materials other than the exam

More information

Chapter 12: Indexing and Hashing. Basic Concepts

Chapter 12: Indexing and Hashing. Basic Concepts Chapter 12: Indexing and Hashing! Basic Concepts! Ordered Indices! B+-Tree Index Files! B-Tree Index Files! Static Hashing! Dynamic Hashing! Comparison of Ordered Indexing and Hashing! Index Definition

More information

Hash Open Indexing. Data Structures and Algorithms CSE 373 SP 18 - KASEY CHAMPION 1

Hash Open Indexing. Data Structures and Algorithms CSE 373 SP 18 - KASEY CHAMPION 1 Hash Open Indexing Data Structures and Algorithms CSE 373 SP 18 - KASEY CHAMPION 1 Warm Up Consider a StringDictionary using separate chaining with an internal capacity of 10. Assume our buckets are implemented

More information

HASH TABLES. Hash Tables Page 1

HASH TABLES. Hash Tables Page 1 HASH TABLES TABLE OF CONTENTS 1. Introduction to Hashing 2. Java Implementation of Linear Probing 3. Maurer s Quadratic Probing 4. Double Hashing 5. Separate Chaining 6. Hash Functions 7. Alphanumeric

More information

CS 561, Lecture 2 : Randomization in Data Structures. Jared Saia University of New Mexico

CS 561, Lecture 2 : Randomization in Data Structures. Jared Saia University of New Mexico CS 561, Lecture 2 : Randomization in Data Structures Jared Saia University of New Mexico Outline Hash Tables Bloom Filters Skip Lists 1 Dictionary ADT A dictionary ADT implements the following operations

More information

Hash Tables Outline. Definition Hash functions Open hashing Closed hashing. Efficiency. collision resolution techniques. EECS 268 Programming II 1

Hash Tables Outline. Definition Hash functions Open hashing Closed hashing. Efficiency. collision resolution techniques. EECS 268 Programming II 1 Hash Tables Outline Definition Hash functions Open hashing Closed hashing collision resolution techniques Efficiency EECS 268 Programming II 1 Overview Implementation style for the Table ADT that is good

More information

CSE373: Data Structure & Algorithms Lecture 21: More Comparison Sorting. Aaron Bauer Winter 2014

CSE373: Data Structure & Algorithms Lecture 21: More Comparison Sorting. Aaron Bauer Winter 2014 CSE373: Data Structure & Algorithms Lecture 21: More Comparison Sorting Aaron Bauer Winter 2014 The main problem, stated carefully For now, assume we have n comparable elements in an array and we want

More information

ECE 122. Engineering Problem Solving Using Java

ECE 122. Engineering Problem Solving Using Java ECE 122 Engineering Problem Solving Using Java Lecture 27 Linear and Binary Search Overview Problem: How can I efficiently locate data within a data structure Searching for data is a fundamental function

More information

Computational Optimization ISE 407. Lecture 16. Dr. Ted Ralphs

Computational Optimization ISE 407. Lecture 16. Dr. Ted Ralphs Computational Optimization ISE 407 Lecture 16 Dr. Ted Ralphs ISE 407 Lecture 16 1 References for Today s Lecture Required reading Sections 6.5-6.7 References CLRS Chapter 22 R. Sedgewick, Algorithms in

More information

Sorting. Popular algorithms: Many algorithms for sorting in parallel also exist.

Sorting. Popular algorithms: Many algorithms for sorting in parallel also exist. Sorting Popular algorithms: Selection sort* Insertion sort* Bubble sort* Quick sort* Comb-sort Shell-sort Heap sort* Merge sort* Counting-sort Radix-sort Bucket-sort Tim-sort Many algorithms for sorting

More information

Fundamental Algorithms

Fundamental Algorithms Fundamental Algorithms Chapter 7: Hash Tables Michael Bader Winter 2011/12 Chapter 7: Hash Tables, Winter 2011/12 1 Generalised Search Problem Definition (Search Problem) Input: a sequence or set A of

More information

EECS 2011M: Fundamentals of Data Structures

EECS 2011M: Fundamentals of Data Structures M: Fundamentals of Data Structures Instructor: Suprakash Datta Office : LAS 3043 Course page: http://www.eecs.yorku.ca/course/2011m Also on Moodle Note: Some slides in this lecture are adopted from James

More information

Sorting Algorithms. CptS 223 Advanced Data Structures. Larry Holder School of Electrical Engineering and Computer Science Washington State University

Sorting Algorithms. CptS 223 Advanced Data Structures. Larry Holder School of Electrical Engineering and Computer Science Washington State University Sorting Algorithms CptS 223 Advanced Data Structures Larry Holder School of Electrical Engineering and Computer Science Washington State University 1 QuickSort Divide-and-conquer approach to sorting Like

More information

Chapter 12: Indexing and Hashing

Chapter 12: Indexing and Hashing Chapter 12: Indexing and Hashing Basic Concepts Ordered Indices B+-Tree Index Files B-Tree Index Files Static Hashing Dynamic Hashing Comparison of Ordered Indexing and Hashing Index Definition in SQL

More information

Lecture 9: Sorting Algorithms

Lecture 9: Sorting Algorithms Lecture 9: Sorting Algorithms Bo Tang @ SUSTech, Spring 2018 Sorting problem Sorting Problem Input: an array A[1..n] with n integers Output: a sorted array A (in ascending order) Problem is: sort A[1..n]

More information

Sorting is a problem for which we can prove a non-trivial lower bound.

Sorting is a problem for which we can prove a non-trivial lower bound. Sorting The sorting problem is defined as follows: Sorting: Given a list a with n elements possessing a total order, return a list with the same elements in non-decreasing order. Remember that total order

More information

( ) + n. ( ) = n "1) + n. ( ) = T n 2. ( ) = 2T n 2. ( ) = T( n 2 ) +1

( ) + n. ( ) = n 1) + n. ( ) = T n 2. ( ) = 2T n 2. ( ) = T( n 2 ) +1 CSE 0 Name Test Summer 00 Last Digits of Student ID # Multiple Choice. Write your answer to the LEFT of each problem. points each. Suppose you are sorting millions of keys that consist of three decimal

More information

IS 2610: Data Structures

IS 2610: Data Structures IS 2610: Data Structures Searching March 29, 2004 Symbol Table A symbol table is a data structure of items with keys that supports two basic operations: insert a new item, and return an item with a given

More information

Lecture 17. Improving open-addressing hashing. Brent s method. Ordered hashing CSE 100, UCSD: LEC 17. Page 1 of 19

Lecture 17. Improving open-addressing hashing. Brent s method. Ordered hashing CSE 100, UCSD: LEC 17. Page 1 of 19 Lecture 7 Improving open-addressing hashing Brent s method Ordered hashing Page of 9 Improving open addressing hashing Recall the average case unsuccessful and successful find time costs for common openaddressing

More information

How many leaves on the decision tree? There are n! leaves, because every permutation appears at least once.

How many leaves on the decision tree? There are n! leaves, because every permutation appears at least once. Chapter 8. Sorting in Linear Time Types of Sort Algorithms The only operation that may be used to gain order information about a sequence is comparison of pairs of elements. Quick Sort -- comparison-based

More information