III Data Structures. Dynamic sets


III Data Structures: Elementary Data Structures, Hash Tables, Binary Search Trees, Red-Black Trees

Dynamic sets

Sets are fundamental to computer science. Algorithms may require several different types of operations to be performed on sets. For example, many algorithms need only the ability to insert elements into, delete elements from, and test membership in a set. We call a dynamic set that supports these operations a dictionary.

Operations on dynamic sets

Operations on a dynamic set can be grouped into queries and modifying operations.

SEARCH(S, k): Given a set S and a key value k, return a pointer x to an element in S such that x.key = k, or NIL if no such element belongs to S.

INSERT(S, x): Augment the set S with the element pointed to by x. We assume that any attributes in element x have already been initialized.

DELETE(S, x): Given a pointer x to an element in the set S, remove x from S. Note that this operation takes a pointer to an element, not a key value.

MINIMUM(S): A query on a totally ordered set S that returns a pointer to the element of S with the smallest key.

MAXIMUM(S): Return a pointer to the element of S with the largest key.

SUCCESSOR(S, x): Given an element x whose key is from a totally ordered set S, return a pointer to the next larger element in S, or NIL if x is the maximum element.

PREDECESSOR(S, x): Given an element x whose key is from a totally ordered set S, return a pointer to the next smaller element in S, or NIL if x is the minimum element.

We usually measure the time taken to execute a set operation in terms of the size of the set. E.g., we later describe a data structure that can support any of these operations on a set of size n in time O(lg n).

10 Elementary Data Structures

10.1 Stacks and queues

In stacks and queues the element removed from the set by DELETE is prespecified. In a stack, the element deleted is the one most recently inserted: the stack implements a last-in, first-out (LIFO) policy. In a queue, the element deleted is the one that has been in the set for the longest time: the queue policy is first-in, first-out (FIFO). We consider array implementations of stacks and queues.

Stacks

The INSERT operation on a stack is called PUSH. The DELETE operation, which does not take an element argument, is called POP. These names are allusions to physical stacks.

We can implement a stack of at most n elements with an array S[1..n]. The array has an attribute S.top that indexes the most recently inserted element. The stack consists of elements S[1..S.top], where S[1] is the element at the bottom of the stack and S[S.top] is the element at the top. When S.top = 0, the stack is empty. We can test whether the stack is empty with the query operation STACK-EMPTY. Popping an empty stack leads to underflow.

STACK-EMPTY(S)
1. if S.top == 0
2.     return TRUE
3. else return FALSE

PUSH(S, x)
1. S.top = S.top + 1
2. S[S.top] = x

POP(S)
1. if STACK-EMPTY(S)
2.     error "underflow"
3. else S.top = S.top - 1
4.     return S[S.top + 1]

Each stack operation takes O(1) time.
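The array-based stack above can be sketched in Python. This is a minimal illustration, not the book's code: the class name is mine, the 1-based pseudocode indices become 0-based, and an explicit overflow check is added since Python lists would otherwise grow.

```python
class ArrayStack:
    """Fixed-capacity stack backed by an array, following the slides' scheme."""

    def __init__(self, capacity):
        self.data = [None] * capacity
        self.top = -1  # corresponds to S.top == 0 in the 1-based pseudocode

    def is_empty(self):  # STACK-EMPTY
        return self.top == -1

    def push(self, x):  # PUSH, with an added overflow check
        if self.top + 1 == len(self.data):
            raise OverflowError("stack overflow")
        self.top += 1
        self.data[self.top] = x

    def pop(self):  # POP: decrement top, return the element just above it
        if self.is_empty():
            raise IndexError("stack underflow")
        self.top -= 1
        return self.data[self.top + 1]
```

Pushing 1, 2, 3 and then popping returns 3, 2, 1, illustrating the LIFO policy; every operation is O(1).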

Queues

The INSERT operation on a queue is called ENQUEUE. We call the DELETE operation DEQUEUE; like the stack operation POP, DEQUEUE takes no element argument. The queue has a head and a tail. When an element is enqueued, it takes its place at the tail of the queue. The element dequeued is always the one at the head of the queue.

One way to implement a queue of at most n - 1 elements uses an array Q[1..n]. The queue has an attribute Q.head that indexes (points to) its head. The attribute Q.tail indexes the next location at which a newly arriving element will be inserted into the queue. We wrap around: location 1 immediately follows location n in a circular order. When Q.head == Q.tail, the queue is empty. Initially, we have Q.head = Q.tail = 1.

ENQUEUE(Q, x)
1. Q[Q.tail] = x
2. if Q.tail == Q.length
3.     Q.tail = 1
4. else Q.tail = Q.tail + 1

DEQUEUE(Q)
1. x = Q[Q.head]
2. if Q.head == Q.length
3.     Q.head = 1
4. else Q.head = Q.head + 1
5. return x

Both operations take O(1) time.
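The circular-array queue can be sketched in Python as follows. The class name is mine, indices are 0-based, the wrap-around is expressed with the modulus operator, and overflow/underflow checks (which the pseudocode omits) are added; as in the slides, a capacity-n array holds at most n - 1 elements.

```python
class ArrayQueue:
    """Circular-array queue holding at most capacity - 1 elements."""

    def __init__(self, capacity):
        self.data = [None] * capacity
        self.head = 0  # index of the next element to dequeue
        self.tail = 0  # index where the next element is enqueued

    def enqueue(self, x):  # ENQUEUE, with an added full check
        if (self.tail + 1) % len(self.data) == self.head:
            raise OverflowError("queue overflow")
        self.data[self.tail] = x
        self.tail = (self.tail + 1) % len(self.data)  # wrap around

    def dequeue(self):  # DEQUEUE, with an added empty check
        if self.head == self.tail:  # head == tail means empty
            raise IndexError("queue underflow")
        x = self.data[self.head]
        self.head = (self.head + 1) % len(self.data)
        return x
```

Enqueueing 1, 2, 3 and dequeueing yields 1, 2, 3 in the same order, illustrating the FIFO policy.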

10.2 Linked lists

The linear order in a linked list is determined by a pointer in each object. Each element of a doubly linked list is an object with an attribute key and two other pointer attributes: next and prev. x.next points to the successor of x in the linked list, and x.prev points to its predecessor. If x.prev = NIL, the element x is the first element, or head, of the list. If x.next = NIL, the element x is the last element, or tail, of the list. Attribute L.head points to the first element of the list. If L.head = NIL, the list is empty.

Searching a linked list

LIST-SEARCH(L, k)
1. x = L.head
2. while x ≠ NIL and x.key ≠ k
3.     x = x.next
4. return x

To search a list of n objects, LIST-SEARCH takes Θ(n) time in the worst case, since it may have to search the entire list.

Inserting into a linked list

LIST-INSERT(L, x)
1. x.next = L.head
2. if L.head ≠ NIL
3.     L.head.prev = x
4. L.head = x
5. x.prev = NIL

The running time for LIST-INSERT is O(1).

Deleting from a linked list

LIST-DELETE(L, x)
1. if x.prev ≠ NIL
2.     x.prev.next = x.next
3. else L.head = x.next
4. if x.next ≠ NIL
5.     x.next.prev = x.prev

This runs in O(1) time; if we wish to delete an element with a given key, Θ(n) time is required.

Sentinels

LIST-DELETE would be simpler if we could ignore the boundary conditions at the head and tail of the list:

LIST-DELETE'(L, x)
1. x.prev.next = x.next
2. x.next.prev = x.prev
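The three list operations above translate to Python almost line for line. This is a minimal sketch with names of my choosing; NIL becomes None, and delete takes a pointer (a Node object), not a key, exactly as in the pseudocode.

```python
class Node:
    """One element of a doubly linked list: a key and two pointers."""
    def __init__(self, key):
        self.key = key
        self.prev = None
        self.next = None

class LinkedList:
    """Doubly linked list with a head pointer (LIST-SEARCH/INSERT/DELETE)."""

    def __init__(self):
        self.head = None

    def search(self, k):  # LIST-SEARCH: Θ(n) worst case
        x = self.head
        while x is not None and x.key != k:
            x = x.next
        return x

    def insert(self, x):  # LIST-INSERT: splice x in at the head, O(1)
        x.next = self.head
        if self.head is not None:
            self.head.prev = x
        self.head = x
        x.prev = None

    def delete(self, x):  # LIST-DELETE: unlink node x in O(1)
        if x.prev is not None:
            x.prev.next = x.next
        else:
            self.head = x.next
        if x.next is not None:
            x.next.prev = x.prev
```

Note that delete is O(1) only because x is already a pointer into the list; deleting by key costs a Θ(n) search first.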

A sentinel is a dummy object that allows us to simplify boundary conditions. We provide the list L with an object L.nil that represents NIL but has all the attributes of the other objects in the list. A reference to NIL in list code is replaced by a reference to the sentinel L.nil. This turns a doubly linked list into a circular, doubly linked list with a sentinel; the sentinel lies between the head and the tail.

Attribute L.nil.next points to the head of the list, and L.nil.prev points to the tail. Similarly, both the next attribute of the tail and the prev attribute of the head point to L.nil. Since L.nil.next points to the head, we can eliminate the attribute L.head altogether.

The code for LIST-SEARCH remains the same as before, but references to NIL and L.head change:

LIST-SEARCH'(L, k)
1. x = L.nil.next
2. while x ≠ L.nil and x.key ≠ k
3.     x = x.next
4. return x

Sentinels rarely reduce the asymptotic time bounds, but they can reduce constant factors. The gain is usually a matter of clarity of code rather than speed. In other situations, however, the use of sentinels helps to tighten the code in a loop, thus reducing the coefficient of, say, n or n² in the running time. Sentinels should be used judiciously: when there are many small lists, the extra storage used by their sentinels can represent significant wasted memory.
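A sentinel-based version can be sketched as follows (a hypothetical minimal class of mine, not the book's code). Note how delete shrinks to two unconditional pointer assignments, and the empty list is simply the sentinel pointing at itself.

```python
class Node:
    """List element: a key and two pointers (key is None for the sentinel)."""
    def __init__(self, key):
        self.key = key
        self.prev = None
        self.next = None

class SentinelList:
    """Circular doubly linked list with a sentinel L.nil between head and tail."""

    def __init__(self):
        self.nil = Node(None)     # sentinel object standing in for NIL
        self.nil.next = self.nil  # empty list: sentinel points to itself
        self.nil.prev = self.nil

    def search(self, k):  # LIST-SEARCH': compare against the sentinel, not None
        x = self.nil.next
        while x is not self.nil and x.key != k:
            x = x.next
        return x  # returns the sentinel itself when k is absent

    def insert(self, x):  # insert at the head, i.e., just after the sentinel
        x.next = self.nil.next
        self.nil.next.prev = x
        self.nil.next = x
        x.prev = self.nil

    def delete(self, x):  # LIST-DELETE': no boundary checks needed
        x.prev.next = x.next
        x.next.prev = x.prev
```

The sentinel costs one extra node per list, which is why the slides caution against it when many small lists are kept.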

11 Hash Tables

Often one needs only the dictionary operations INSERT, SEARCH, and DELETE. A hash table effectively implements dictionaries. In the worst case, searching for an element in a hash table takes Θ(n) time. In practice, however, hashing performs extremely well: under reasonable assumptions, the average time to search for an element is O(1).

11.2 Hash tables

The set K of keys stored in a dictionary is usually much smaller than the universe U of all possible keys. A hash table requires storage Θ(|K|), while searching for an element in it takes only O(1) time. The catch is that this bound is for the average-case time.

An element with key k is stored in slot h(k); that is, we use a hash function h to compute the slot from the key k. h: U → {0, 1, ..., m - 1} maps the universe U of keys into the slots of a hash table T[0..m - 1]. The size m of the hash table is typically much smaller than |U|. We say that an element with key k hashes to slot h(k), or that h(k) is the hash value of k. If two keys hash to the same slot, we have a collision. Effective techniques resolve the conflict.

The ideal solution avoids collisions altogether. We might try to achieve this goal by choosing a suitable hash function h. One idea is to make h appear to be random, thus minimizing the number of collisions. Of course, h must be deterministic so that a key k always produces the same output h(k). Because |U| > m, there must be at least two keys that have the same hash value; avoiding collisions altogether is therefore impossible.

Collision resolution by chaining

In chaining, we place all the elements that hash to the same slot into the same linked list. Slot j contains a pointer to the head of the list of all stored elements that hash to j. If no such elements exist, slot j contains NIL. The dictionary operations on a hash table are easy to implement when collisions are resolved by chaining.

CHAINED-HASH-INSERT(T, x)
1. insert x at the head of list T[h(x.key)]

CHAINED-HASH-SEARCH(T, k)
1. search for an element with key k in list T[h(k)]

CHAINED-HASH-DELETE(T, x)
1. delete x from the list T[h(x.key)]

The worst-case running time of insertion is O(1). It is fast in part because it assumes that the element being inserted is not already present in the table. For searching, the worst-case running time is proportional to the length of the list; we analyze this operation more closely soon. We can delete an element (given a pointer) in O(1) time if the lists are doubly linked. In singly linked lists, to delete x, we would first have to find x in the list T[h(x.key)].
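A chained hash table can be sketched in Python as below. This is an illustrative simplification of mine: each chain is a Python list rather than a doubly linked list (so delete here is by key and costs Θ(chain length), unlike the O(1) pointer-based CHAINED-HASH-DELETE), and Python's built-in hash stands in for the hash function h.

```python
class ChainedHashTable:
    """Hash table with collision resolution by chaining."""

    def __init__(self, m):
        self.m = m
        self.table = [[] for _ in range(m)]  # T[0..m-1], each slot a chain

    def _h(self, k):
        return hash(k) % self.m  # stand-in for the hash function h

    def insert(self, k, v):  # CHAINED-HASH-INSERT: prepend to the chain, O(1)
        self.table[self._h(k)].insert(0, (k, v))

    def search(self, k):  # CHAINED-HASH-SEARCH: scan only the one chain
        for key, v in self.table[self._h(k)]:
            if key == k:
                return v
        return None

    def delete(self, k):  # delete by key: must first find k in its chain
        chain = self.table[self._h(k)]
        for i, (key, _) in enumerate(chain):
            if key == k:
                del chain[i]
                return
```

With m = 7, the integer keys 10 and 17 collide (both hash to slot 3) and end up in the same chain, yet both remain retrievable.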

Analysis of hashing with chaining

Given a hash table T with m slots that stores n elements, define the load factor α = n/m, i.e., the average number of elements in a chain. Our analysis will be in terms of α, which can be < 1, = 1, or > 1. In the worst case all keys hash to one slot. The worst-case time for searching is thus Θ(n), plus the time to compute the hash function.

Average-case performance of hashing depends on how well h distributes the set of keys to be stored among the m slots, on the average. Simple uniform hashing (SUH): assume that an element is equally likely to hash into any of the m slots, independently of where any other element has hashed to. For j = 0, 1, ..., m - 1, let us denote the length of the list T[j] by n_j, so that n = n_0 + n_1 + ... + n_{m-1}, and the expected value of n_j is E[n_j] = α = n/m.

Theorem 11.1 In a hash table which resolves collisions by chaining, an unsuccessful search takes average-case time Θ(1 + α), under the SUH assumption.

Proof Under SUH, a key k not already in the table is equally likely to hash to any of the m slots. The time to search unsuccessfully for k is the expected time to go through list T[h(k)], which has expected length E[n_{h(k)}] = α. Thus, the expected number of elements examined is α, and the total time required (including computing h(k)) is Θ(1 + α).

Theorem 11.2 In a hash table which resolves collisions by chaining, a successful search takes average-case time Θ(1 + α), under SUH.

Proof We assume that the element x being searched for is equally likely to be any of the n elements stored in the table. The number of elements examined during the search for x is one more than the number of elements that appear before x in x's list. Elements before x in the list were all inserted after x was inserted.

Let us take the average, over the n table elements x, of 1 plus the expected number of elements added to x's list after x was added to the list. Let x_i, i = 1, 2, ..., n, denote the i-th element inserted into the table, and let k_i = x_i.key. For keys k_i and k_j, define the indicator random variable X_ij = I{h(k_i) = h(k_j)}. Under SUH, Pr{h(k_i) = h(k_j)} = 1/m, and by Lemma 5.1, E[X_ij] = 1/m.

Thus, the expected number of elements examined in a successful search is

E[ (1/n) Σ_{i=1}^{n} ( 1 + Σ_{j=i+1}^{n} X_ij ) ]
= (1/n) Σ_{i=1}^{n} ( 1 + Σ_{j=i+1}^{n} E[X_ij] )
= (1/n) Σ_{i=1}^{n} ( 1 + Σ_{j=i+1}^{n} 1/m )

= 1 + (1/(nm)) Σ_{i=1}^{n} (n - i)
= 1 + (1/(nm)) ( n² - n(n + 1)/2 )
= 1 + (n - 1)/(2m)
= 1 + α/2 - α/(2n)

Thus, the total time required for a successful search is Θ(2 + α/2 - α/(2n)) = Θ(1 + α).

If the number of hash-table slots is at least proportional to the number of elements in the table, we have n = O(m) and, consequently, α = n/m = O(m)/m = O(1). Thus, searching takes constant time on average. Insertion takes O(1) worst-case time. Deletion takes O(1) worst-case time when the lists are doubly linked. Hence, we can support all dictionary operations in O(1) time on average.

11.3 Hash functions

A good hash function satisfies (approximately) the assumption of SUH: each key is equally likely to hash to any of the m slots, independently of the other keys. We typically have no way to check this condition, since we rarely know the probability distribution from which the keys are drawn. The keys might not be drawn independently.

If we, e.g., know that the keys are random real numbers k independently and uniformly distributed in the range 0 ≤ k < 1, then the hash function h(k) = ⌊km⌋ satisfies the condition of SUH. In practice, we can often employ heuristic techniques to create a hash function that performs well. Qualitative information about the distribution of keys may be useful in this design process.

In a compiler's symbol table the keys are strings representing identifiers in a program. Closely related symbols, such as pt and pts, often occur in the same program. A good hash function minimizes the chance that such variants hash to the same slot. A good approach derives the hash value in a way that we expect to be independent of any patterns that might exist in the data. For example, the division method computes the hash value as the remainder when the key is divided by a specified prime number.

Interpreting keys as natural numbers

Hash functions assume that the universe of keys is the set of natural numbers N = {0, 1, 2, ...}. We can interpret a character string as an integer expressed in suitable radix notation. We interpret the identifier pt as the pair of decimal integers (112, 116), since p = 112 and t = 116 in ASCII. Expressed as a radix-128 integer, pt becomes 112 · 128 + 116 = 14,452.

11.3.1 The division method

In the division method, we map a key k into one of m slots by taking the remainder of k divided by m. That is, the hash function is h(k) = k mod m. E.g., if the hash table has size m = 12 and the key is k = 100, then h(k) = 4. Since it requires only a single division operation, hashing by division is quite fast.

When using the division method, we usually avoid certain values of m. E.g., m should not be a power of 2, since if m = 2^p, then h(k) is just the p lowest-order bits of k. We are better off designing the hash function to depend on all the bits of the key. A prime not too close to an exact power of 2 is often a good choice for m.
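The division method is a one-liner in Python; the function name is mine. The second assertion in the usage below illustrates the slides' warning: with m = 2^3, the hash value is just the 3 lowest-order bits of the key.

```python
def h_division(k, m):
    """Division-method hash: h(k) = k mod m."""
    return k % m
```

For the slides' example, h_division(100, 12) returns 4; and for m = 8 = 2^3, h_division(0b10110101, 8) returns 0b101, i.e., only the low 3 bits of the key matter.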

Suppose we wish to allocate a hash table, with collisions resolved by chaining, to hold roughly n = 2,000 character strings, where a character has 8 bits. We don't mind examining an average of 3 elements in an unsuccessful search, and so we allocate a hash table of size m = 701. We choose m = 701 because it is a prime near 2000/3 but not near any power of 2. Treating each key k as an integer, our hash function would be h(k) = k mod 701.

11.3.2 The multiplication method

The method operates in two steps. First, multiply k by a constant A, 0 < A < 1, and extract the fractional part of kA. Then, multiply this value by m and take the floor of the result. The hash function is h(k) = ⌊m (kA mod 1)⌋, where kA mod 1 means the fractional part of kA, that is, kA - ⌊kA⌋.

The value of m is not critical; we choose m = 2^p. The function is then easily implemented on computers. Suppose that the word size of the machine is w bits and that k fits into a single word. Restrict A to be a fraction of the form s/2^w, where s is an integer in the range 0 < s < 2^w. Multiply k by the w-bit integer s = A · 2^w. The result is a 2w-bit value r_1 · 2^w + r_0, where r_1 is the high-order word of the product and r_0 is the low-order word. The desired p-bit hash value consists of the p most significant bits of r_0.

Knuth (1973) suggests that A ≈ (√5 - 1)/2 = 0.6180339887... is likely to work reasonably well. Suppose that k = 123,456, p = 14, m = 2^14 = 16,384, and w = 32. Let A be the fraction of the form s/2^32 closest to (√5 - 1)/2, so that A = 2,654,435,769/2^32. Then k · s = 327,706,022,297,664 = 76,300 · 2^32 + 17,612,864, and so r_1 = 76,300 and r_0 = 17,612,864. The 14 most significant bits of r_0 yield the value h(k) = 67.

11.4 Open addressing

In open addressing, all elements occupy the hash table itself. That is, each table entry contains either an element of the dynamic set or NIL. A search for an element systematically examines table slots until we find the element or have ascertained that it is not in the table. No lists and no elements are stored outside the table, unlike in chaining.
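The word-level recipe above can be sketched in Python (function name mine; Python integers are unbounded, so the w-bit word arithmetic is simulated with a modulus and a shift). The defaults reproduce the slides' parameters: p = 14, w = 32, s = 2,654,435,769.

```python
def h_multiplication(k, p=14, w=32, s=2654435769):
    """Multiplication-method hash for a w-bit word machine.

    A = s / 2**w approximates (sqrt(5) - 1) / 2; the table has m = 2**p slots.
    """
    r0 = (k * s) % (2 ** w)  # low-order word r_0 of the 2w-bit product k * s
    return r0 >> (w - p)     # the p most significant bits of r_0
```

Running it on the worked example gives h_multiplication(123456) == 67, matching the slides, and every hash value lands in {0, ..., 2^14 - 1}.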

Thus, the hash table can fill up so that no further insertions can be made: the load factor α can never exceed 1. We could store the linked lists for chaining inside the hash table, in the otherwise unused hash-table slots, but in open addressing we instead compute the sequence of slots to be examined rather than following pointers. The extra memory freed by not storing pointers provides the hash table with a larger number of slots for the same amount of memory, potentially yielding fewer collisions and faster retrieval.

To perform insertion, we successively examine, or probe, the hash table until we find an empty slot in which to put the key. Instead of being fixed in the order 0, 1, ..., m - 1 (which would require Θ(n) search time), the sequence of positions probed depends upon the key being inserted. To determine which slots to probe, we extend the hash function to include the probe number (starting from 0) as a second input.

Thus, the hash function becomes h: U × {0, 1, ..., m - 1} → {0, 1, ..., m - 1}. With open addressing, we require that for every key k, the probe sequence h(k, 0), h(k, 1), ..., h(k, m - 1) be a permutation of 0, 1, ..., m - 1, so that every position is eventually considered as a slot for a new key as the table fills up. Let us assume that the elements in the hash table are keys with no satellite information; the key k is identical to the element containing key k.

HASH-INSERT either returns the slot number of key k or flags an error because the table is full.

HASH-INSERT(T, k)
1. i = 0
2. repeat
3.     j = h(k, i)
4.     if T[j] == NIL
5.         T[j] = k
6.         return j
7.     else i = i + 1
8. until i == m
9. error "hash table overflow"

The search for key k probes the same sequence of slots that the insertion algorithm examined.

HASH-SEARCH(T, k)
1. i = 0
2. repeat
3.     j = h(k, i)
4.     if T[j] == k
5.         return j
6.     i = i + 1
7. until T[j] == NIL or i == m
8. return NIL

When we delete a key from slot j, we cannot simply mark it as empty by storing NIL in it: we might then be unable to retrieve any key during whose insertion we had probed slot j. Instead, we mark the slot with the special value DELETED. We modify HASH-INSERT to treat such a slot as empty, so that we can insert a new key there, while HASH-SEARCH passes over DELETED values. When we use the value DELETED, search times no longer depend on the load factor α; therefore chaining is more commonly selected as a collision resolution technique when keys must be deleted.
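HASH-INSERT, HASH-SEARCH, and DELETED tombstones can be sketched together in Python. The slides leave h(k, i) abstract; this sketch picks linear probing, h(k, i) = (h'(k) + i) mod m, as one concrete probe sequence (the class name and that choice are mine), and uses a unique sentinel object as the DELETED marker.

```python
DELETED = object()  # tombstone: distinct from every key and from None (NIL)

class OpenAddressTable:
    """Open addressing with linear probing: h(k, i) = (h'(k) + i) mod m."""

    def __init__(self, m):
        self.m = m
        self.slots = [None] * m  # None plays the role of NIL

    def _probe(self, k, i):
        return (hash(k) + i) % self.m  # probe sequence is a permutation of slots

    def insert(self, k):  # HASH-INSERT, treating DELETED slots as empty
        for i in range(self.m):
            j = self._probe(k, i)
            if self.slots[j] is None or self.slots[j] is DELETED:
                self.slots[j] = k
                return j
        raise OverflowError("hash table overflow")

    def search(self, k):  # HASH-SEARCH: stop at NIL, pass over DELETED
        for i in range(self.m):
            j = self._probe(k, i)
            if self.slots[j] == k:
                return j
            if self.slots[j] is None:
                return None
        return None

    def delete(self, k):  # mark the slot DELETED rather than NIL
        j = self.search(k)
        if j is not None:
            self.slots[j] = DELETED
```

With m = 8 and integer keys, inserting 1 and then 9 puts 9 in slot 2 (slot 1 is taken); after deleting 1, searching for 9 still succeeds because the search passes over the tombstone, and a later insert reuses slot 1.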