# Hashing



1 Hashing. Let K be a key space, e.g., the set of all alphanumeric strings. We want to perform the following typical operations on keys: insert, lookup, and possibly delete. Let T be a table of m memory locations in which we can store (key, pointer) pairs; T is called a hash table. Let h be a function that maps keys into table locations, i.e., h: K -> [1, m]; h is called a hash function. If the time to compute h is independent of the size of K then, ideally, lookup should take constant time. For any key k_j, we store the key (and a pointer to the record containing any information associated with the key) at location h(k_j) in the hash table.

2 Hashing. Suppose the set of keys to be inserted into the table is known in advance. The simplest case is when these keys have a regular structure. Example: the keys are all pairs of upper-case characters, so there are 26^2 keys, and h(c1 c2) = 26*(index(c1) - 1) + index(c2). This is the standard lexicographic storage-allocation function, so sequential memory allocation for arrays is a special (uninteresting) case of hashing.
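The two-character scheme above can be written out directly; `index` and `h2` are our names for the slide's functions:

```python
def index(c):
    # 1-based position of an upper-case letter in the alphabet
    return ord(c) - ord('A') + 1

def h2(key):
    # the lexicographic storage-allocation function from the slide:
    # maps the 26^2 two-letter keys one-to-one onto 1..676
    c1, c2 = key
    return 26 * (index(c1) - 1) + index(c2)
```

Because the key structure is known in advance, this is a perfect hash: no two keys collide.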

3 Hashing. But what if the known keys are some random sample of 12-character strings? Then it would be very difficult to identify a perfect hashing function, h, whose computational time complexity is independent of the size of the set of keys. We could arrange the keys in a binary search tree, and use some ordering of the nodes in the tree to define a hashing function, but then it would take log(n) time to compute h.

4 Hashing. So, we have to define some general hashing function, independent of the elements of K that we happen to insert into the table. Notice, though, that if the number of keys we insert is comparable in size to the size of the table, T, then it is VERY unlikely that our hashing function will be 1-1, so some subsets of keys will hash to the same address in T. When h(k1) = h(k2) we have a collision. Alternatively, we can make T much larger than the number of keys we might ever insert; this wastes storage, and we still might have collisions.

5 Hashing. Key technical problems: What is a good hash function? When two keys hash to the same table location, how do we resolve the collision? Generally, the time to retrieve a key in a hash table is a function of the collision pattern it exhibits with respect to the previously inserted keys. And how do memory hierarchies impact this? When we insert a key, is there a way to rearrange the keys to make subsequent lookups more efficient? What do we do if the table fills up (if its load factor exceeds some critical value for good performance)?

6 Simple hashing functions. A good hashing function should: a) be easy to compute, i.e., have constant time complexity; b) spread the keys out evenly (uniformly) in the hash table: if key k is drawn randomly, the probability that h(k) = i should be 1/m, where m = |T|, and independent of i. If k is regarded as an integer, then h(k) = k mod m is a good simple method: a) if keys are alphabetic, then m should not be a power of 2, since k mod m would then be just the low-order bits of k, independent of the rest of k; b) ordinarily choose m to be prime (for several reasons, TBD).
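A minimal sketch of the division method, treating the key's bytes as an integer (`div_hash` is our name); the comment shows why a power-of-two m is a poor choice for character keys:

```python
def div_hash(key, m):
    # division method: reduce the key, viewed as an integer, modulo m
    k = int.from_bytes(key.encode(), "big")
    return k % m

# With m = 256 (a power of 2), the hash is just the last byte of the key,
# so all keys ending in the same character collide:
#   div_hash("Xa", 256) == div_hash("Ya", 256)
```

With a prime m such as 257, every byte of the key influences the result.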

7 Simple hashing functions. k mod m works very well when the keys really are uniformly distributed over the space of all possible keys. But as the set of keys deviates from uniform, its performance degrades rapidly: we get lots of collisions even though the table is nearly empty.

8 Conflict resolution by chaining. Separate chaining: T[i] does not store a single key, but is a pointer to a list of all keys encountered such that h(k) = i. If the number of collisions is small, a simple linked list is sufficient to store all colliding keys. The data structure is on two levels: a hash table that divides the entries into m linked lists. The lists are referred to as buckets. During a lookup or insert operation, each access to the data structure is called a probe.
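A small separate-chaining table along these lines (class and method names are ours), using Python lists as the buckets:

```python
class ChainedTable:
    def __init__(self, m):
        self.m = m
        self.buckets = [[] for _ in range(m)]   # one chain per table slot

    def _slot(self, key):
        return hash(key) % self.m

    def insert(self, key):
        chain = self.buckets[self._slot(key)]
        if key not in chain:                    # keep keys unique
            chain.append(key)

    def lookup(self, key):
        # every key comparison along the chain counts as one probe
        return key in self.buckets[self._slot(key)]
```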

9 Separate chaining. (figure: a hash table with separate chaining containing the keys d, e, dr, ab, gh, re, fed, ton) The average number of probes to find a key in the table is (4*2 + 3*3 + 4)/8 = 2.625. For a hash table with m locations, if we insert n records into the table, then we say the load factor of the table is a = n/m. For the table above, a = 1.0. For a table with separate chaining, a can be less than or greater than 1; the other methods will only allow a < 1. The expected cost of a search is proportional to the load factor.

10 Coalesced chaining. It seems crazy to use memory for links as well as keys; maybe just make a bigger hash table to begin with and reduce the frequency of collisions! But where do we store colliding keys? In the table itself! Use the empty cells of the hash table to store the elements in the chain. When a collision is encountered during an insertion, find the next open slot in the table and place the key there, linking it into the chain. This slot might be required for a subsequent insertion; just treat this as another collision and perform another sequential search for an open slot.

11 Coalesced chaining. The lookup operation is implemented the same as in separate chaining, but the implicit chains can contain elements with several different hash values. All elements with a given hash value are, however, in the same chain. If an empty slot is encountered, the search fails.

12 Example of coalesced chaining. (figure: successive snapshots of the table as the keys are inserted in the order d, dr, e, ab, gh, re, fed, ton; colliding keys are placed in open slots and linked into chains, and the probe counts for each key are shown)

13 Open addressing strategies. A generalization of coalesced chaining: eliminate the links! For each key, its chain is stored in the hash table in a (key-dependent) sequence of positions, its probe sequence: the sequence of table positions belonging to a chain. H(k,p) denotes the p'th position tried for key k, where p = 0, 1, ... To search for a key, look at successive locations in its probe sequence until either the key is found or an empty table position is encountered; if the key is to be inserted, it can be placed at that open position.

14 Sequential or linear probing. H(k,p) = (h(k) + pc) mod m. If h(k) = i, then the probe sequence is i, i+c, i+2c, ... c has to be relatively prime to m to ensure all locations are on the sequence; let's assume c = 1. In this case, if the last position in the table is m-1 and the probe sequence reaches m-1, the next location probed is 0 (wrap-around). (figure: insertion sequence a, c, g, a', b, i, c', a'', with the number of probes needed to find each key)
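A sketch of insert and lookup with linear probing (c = 1) and wrap-around; the function names and the probe-count return value are ours:

```python
def lp_insert(table, key, h):
    # probe h(key), h(key)+1, ... mod m until an open slot is found
    m = len(table)
    i = h(key) % m
    for _ in range(m):
        if table[i] is None or table[i] == key:
            table[i] = key
            return i
        i = (i + 1) % m
    raise RuntimeError("table full")

def lp_lookup(table, key, h):
    # returns (slot, probes); an empty slot ends an unsuccessful search
    m = len(table)
    i = h(key) % m
    for probes in range(1, m + 1):
        if table[i] is None:
            return None, probes
        if table[i] == key:
            return i, probes
        i = (i + 1) % m
    return None, m
```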

15-17 (figures: successive snapshots of the table under linear probing as the insertion sequence a, c, g, a', b, i, c', a'' is processed, showing the number of probes needed to find each key; later insertions into the growing cluster require more and more probes)

18 Linear probing. Easy to implement, with good behavior when the table is not full. Primary clustering: once a block of a few contiguous locations is formed, it becomes a "target" for subsequent collisions; each collision makes the cluster larger, and the larger the cluster, the bigger a target it is for further collisions. The simple fix of using the probe sequence i + jk will not work: clusters will just become sequences of the form i, i+k mod m, i+2k mod m, ...

19 Open addressing with double hashing. Clusters can be broken up if the probe sequence for a key is chosen independently of its primary position. Use two hash functions: 1) h determines the first location in the probe sequence; 2) h' is used to determine the remainder of the sequence, and depends on k. H(k,0) = h(k); H(k,p+1) = (H(k,p) + h'(k)) mod m. In linear probing, h'(k) = 1 for all k.

20 Open addressing with double hashing. The probe sequence should visit all of the entries in the table. So h'(k) must be greater than 0 and relatively prime to m for any k; that is, h'(k) and m should have no common divisor. If d were a common divisor, then ((m/d) * h'(k)) mod m = ((m * h'(k))/d) mod m = 0, so the (m/d)'th probe in the sequence would be the same as the first. The simplest solution is to choose m to be prime. Example: m = 32, h'(k) = 6, h(k) = 4. The probe sequence is 4, 10, 16, 22, 28, 2, 8, 14, 20, 26, 0, 6, 12, 18, 24, 30, 4, ... The common divisor is 2, so only 32/2 = 16 unique table addresses are generated.
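The m = 32, h'(k) = 6 example above can be checked mechanically (`probe_sequence` is our name):

```python
def probe_sequence(start, step, m):
    # double-hashing probe sequence:
    # H(k,0) = h(k); H(k,p+1) = (H(k,p) + h'(k)) mod m
    seq, pos = [], start % m
    for _ in range(m):
        seq.append(pos)
        pos = (pos + step) % m
    return seq
```

With m = 32 the step 6 shares the divisor 2 with m and only 16 addresses are ever visited; with the prime m = 31 every address is visited.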

21 Exponential double hashing. H(k,i) = (h(k) + a^i * h'(k)) mod m, where m is prime and the constant a is a primitive root modulo m. If h(k) = k mod m and h'(k) = k mod (m-2), then the probe sequences will touch all of the entries in the table, and the sequences will be more random than with the simple double hashing method. Practically, this hashing method works much better when the keys deviate from being uniformly distributed (like variable names in a program).

22 How caches affect hashing. Modern computers have several levels of memory: L1 cache, L2 cache, main memory, disk. The performance of any program depends on how often its memory references hit the cache; otherwise you bring some block of memory down the memory hierarchy and push some block up, which takes time. Which hashing method would you expect to generate the most cache hits? Intuitively, linear probing, since it probes sequential locations in memory!

23 How caches affect hashing. But it DEPENDS on: the load factor of the table (the percentage of table locations filled), and how much the keys inserted in the table deviate from being uniformly distributed over the key set. In fact, if the keys were uniformly distributed (which they never are), then linear probing is best at almost all load factors. But as the key set deviates from uniformity and load factors grow past 20-30%, double hashing schemes work better, with exponential double hashing the best. Heileman, Gregory L., and Wenbin Luo. "How Caching Affects Hashing." ALENEX/ANALCO 2005.

24 Test data for hashing. (table: names with dates of death and hash values h(k) and h'(k); the years and numeric columns were lost in transcription. The names are J. Adams, S. Adams, J. Bartlett, C. Braxton, C. Carroll, S. Chase, A. Clark, G. Clymer, W. Ellery, W. Floyd, B. Franklin, E. Gerry, B. Gwinnett, L. Hall, J. Hancock, B. Harrison, J. Hart.)

25 Example: open addressing with double hashing. (figure: the hash table after inserting the test data with double hashing, showing the number of probes needed to find each name)

26 Brief review. What is a good hash function? It should spread keys around the table randomly, and should be efficient to compute (constant time). Collisions are inevitable; how do we handle them? Separate chaining: linked lists pointed to by hash table elements. Wasteful: use the space spent on links to build a bigger table to begin with, with implicit collision chains stored in the table. Linear probing: the same relative collision sequence for every key. Double hashing: different collision sequences for different keys.

27 Next. How can we make lookup more efficient? By ensuring that the keys encountered on a collision chain are weakly ordered: then if we encounter a key on the collision chain that is larger than the query, we can terminate the search with failure. This will cause unsuccessful lookups to fail early, but will not improve the search cost, on average, for successful lookups. Since collision chains overlap, maybe we can move keys around on their collision chains to shorten them. This will reduce the search cost for successful lookups as well as the search time for unsuccessful ones (since the individual collision chains are mostly shorter).

28 Ordered hashing. In the example, names were inserted in alphabetical order, so all of the chains constructed had their data in alphabetical order. This would allow us to terminate an unsuccessful search whenever we encounter an element on a chain that follows the search key lexicographically.

29 Ordered hashing. In separate chaining we can easily maintain the individual linked lists in alphabetic order. Can we do this with open addressing? No, but we can come close. Property to be maintained: during the probe sequence for k, the keys encountered before reaching k are smaller than k.

30 Ordered hashing. Trick: as the probe sequence for key k is followed during insertion, if a key k' is encountered such that k < k', replace k' by k and proceed to insert k' as dictated by its probe sequence. Why does this create a hash table with the desired property? Proof by induction: 1) When the table is empty, the property holds trivially. 2) The table can be extended in one of two ways: a) putting a key that is greater than any of its predecessors (i.e., no interchanges) in a previously empty slot; b) replacing a key k' in the table by a smaller key k; but then any lookup that went through k' will go through k also, since k < k'.
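A sketch of the insertion trick using linear probing as the probe sequence (with step 1 all chains share successive slots, so the displaced key can simply continue from the next position); `ordered_insert` is our name:

```python
def ordered_insert(table, key, h):
    # if the probed slot holds a larger key, the smaller key takes the
    # slot and the displaced key is inserted further along the sequence
    m = len(table)
    i = h(key) % m
    for _ in range(m):
        if table[i] is None:
            table[i] = key
            return
        if table[i] == key:
            return
        if key < table[i]:
            table[i], key = key, table[i]   # swap; keep inserting k'
        i = (i + 1) % m
    raise RuntimeError("table full")
```

The slide's order-independence claim can be checked by inserting the same keys in two different orders and comparing the resulting tables.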

31 Ordered hashing. Obviously, the final state of the hash table for a given set of keys will be the same, regardless of the order in which the keys are inserted into the table! Example: h(k) = index of the first letter of k in the alphabet, h'(k) = index of the second letter of k, using only the first 13 letters of the alphabet. (figure: insert ago, def, ace, dag, abe)

32 Example. (figure: table snapshots while inserting ago, def, ace, dag, abe with ordered hashing, showing the probe counts for each key)

33 Example. (figure: further snapshots while inserting dag, abe, and ade; displaced keys move down their own probe sequences, and the probe counts are updated)

34 Ordered hashing. If we insert a set of keys into a hash table using ordered hashing, then the collision chains satisfy the following property: any key encountered on the collision chain for a key k precedes k lexicographically. The keys on the collision chain will NOT necessarily be sorted (in fact they typically will not be).

35 (figure: successive table states as the keys ba, ab, bb, ad, da, bd are inserted with ordered hashing)

36 Brent's method. During the insertion process, rearrange keys so that collision chains are shorter and subsequent searches are faster. Good when searches are more common than insertions, because insertions are expensive. Called self-organizing double hashing. Let p_0, p_1, ..., p_t be the sequence of table addresses generated by a double hashing function for some key k before encountering an empty table position, p_t. If we insert k at p_t, then any subsequent search for k will require t+1 probes.

37 Brent's method. Now, we could replace any of the keys in locations p_0, p_1, ..., p_{t-1} with k and shorten the search (in terms of number of probes) for k. Suppose we place k in p_r. But then we have to move the displaced key, k', in p_r somewhere; one place to put it is at the end of its own chain. If the savings from placing k at p_r, namely t - r, is greater than the extra work in subsequent searches for the key originally stored at p_r (the length of the chain starting at p_r), then the switch will, overall, save search time.

38 Simple example. (figure) If we insert the new key here and move the key stored there to here, then the total incremental probe cost is 5; we could instead insert the new key at the end of its probe sequence, at a cost of 7 probes. How did the key in the blue box get there? It either hashed there directly (earlier), or the blue box is on its collision chain.

39 Brent's method. We will visualize the hash table as a 2-d grid. The i'th column will be the collision chain starting at p_i. Note that this does not mean that the key stored at p_i is the first element in its chain, but the algorithm works in any case, because the extra number of probes needed to find the key at p_i is the distance between p_i and the end of the chain passing through p_i. The j'th row will contain the address of the j'th element on each collision chain (or will be empty if some collision chain has fewer than j elements).

40 Brent's method. (grid: column j contains the addresses p_j, p_j + c_j, p_j + 2c_j, p_j + 3c_j, ..., where c_j is the double-hashing increment of the key stored at p_j) The columns are really of different lengths, ending at the first empty location in each collision chain. Try to insert k as close to p_0 as possible. Let L_j be the length of the chain in column j. Find the column that maximizes (t - j) - L_j (savings minus additional cost).
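The selection rule above can be stated in a few lines (`best_column` is our name): given t and the chain lengths L_j below each probe position, pick the column with the largest positive net saving, falling back to the end position p_t:

```python
def best_column(t, lengths):
    # net saving of putting the new key in column j: (t - j) - L_j
    # (probes saved for the new key minus extra probes for the moved key)
    best_j, best_gain = t, 0
    for j, L in enumerate(lengths):
        gain = (t - j) - L
        if gain > best_gain:
            best_j, best_gain = j, gain
    return best_j, best_gain
```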

41 Brent's method. (figure: the grid with its cells numbered 1 through 15 in the order the algorithm examines them, so that combinations of column j and chain depth with lower total cost are tried first)

42 Gonnet-Munro. Brent's algorithm only moves keys on the probe sequence of the insertion key, and only to the ends of their own probe sequences. Gonnet-Munro will also consider moving keys that are on those keys' probe sequences, and so on recursively. This requires some systematic way of searching through these collision chains.

43 Gonnet-Munro example. (figure: a hash table holding Joan, Ruth, Jay, Kim, Ron, Alex, Rita, Bob, Fay, Jill, Alan, Kathy, Tim; the probe sequence of the key to be inserted is p_0 = Tim, p_1 = Alan, p_2 = Jay, p_3 = Katy, p_4 = Ron)

44 Gonnet-Munro. Suppose we attempt to insert Rudy, which takes us along the probe sequence p_0, ..., p_4. Brent's algorithm would replace Alan with Rudy and move Alan to the end of its probe sequence; this saves 3 probes for Rudy and gives up 2 probes for Alan, for a net savings of 1.

45 Gonnet-Munro. But notice that we could instead replace Tim with Rudy (saving 4 probes), replace Joan with Tim (costing 1), and move Joan one position down her probe sequence (costing 1), for a total saving of 2, better than Brent's.

46 Gonnet-Munro search tree. The search is conducted using a binary tree constructed from the probe sequences. The leftmost branch of the tree is the original probe sequence p_0, ..., p_4; right branches represent probe sequences starting from these positions, and so on. Each node in the binary tree has fields for: a Key value; the location (LOC) in the hash table at which this key value is stored; and the secondary hash function value, INC, for Key.

47 Gonnet-Munro search tree. The root of the tree has LOC = h(k), where k is the key to be inserted; INC = h'(k); and Key = the original key at position LOC.

48 Given any node, S, in the tree: its left son, L_S, corresponds to LOC_S + INC_S, i.e., LOC_LS = LOC_S + INC_S, INC_LS = INC_S, and Key_LS = the key stored at H[LOC_S + INC_S]. Its right son, R_S, is the next location encountered in the probe sequence of Key_S: LOC_RS = LOC_S + h'(Key_S) and INC_RS = h'(Key_S). The left chain from R_S is thus the probe sequence for Key_S.

49 The Tree. (figure: the probe tree for inserting Rudy; the leftmost branch is Rudy's probe sequence Tim, Alan, Jay, Katy, Ron, each node is annotated with the address identity relating it to its parent, and right branches carry the probe sequences of the keys encountered)

50 The Algorithm. Generate the probe tree from top to bottom and left to right until encountering the first null position in some hash chain. Keys are moved along the path from the root to this empty position using the following recursive step: the algorithm is at node N holding key K_N, trying to insert key K_NEW. Move along any left links on the path to the NULL node until encountering either: a NULL position, in which case insert K_NEW and halt; or a right pointer to N, in which case insert K_NEW at the father of N and recursively insert the key originally at the father of N into the tree rooted at N.

51-57 Examples. (figures: step-by-step snapshots of the Gonnet-Munro insertion of Rudy into the probe tree. Rudy replaces Tim at the root; Tim is then inserted into the tree rooted at Joan, following left links first; Tim in turn displaces Rita, which moves along its own probe sequence. Each step applies the recursive rule from the algorithm above.)

58 Gonnet-Munro is not perfect. (figure: a configuration involving Rob, Rudy, Alan, Katy, Jay, Ruth, Tim, Kim) Here we could move Alan back one position on its collision chain and move Rob one position along its chain, an option the algorithm does not consider.

59 Deletion from hash tables. Separate chaining: easy, just remove the entry from the linked list. Open addressing: complicated, since chains "overlap", i.e., the position of a key being deleted might be on the probe sequence of another key still in the table. Example with sequential probing: insert key k in position i; insert colliding key k' in position (i+1) mod m; delete key k, making table entry i empty; now a search for key k' finds an empty slot at position i and fails.

60 Deletion from hash tables. Solution: add a one-bit deleted flag to each table entry. A search proceeds over any entry with the deleted bit set to 1; an insertion occurs at the first position that is empty or has the deletion bit set to 1. Problem: searches become lengthy after a large number of insertions and deletions have occurred. The deleted bit can also be used with other collision resolution strategies, but they also suffer from degraded performance over time. An expensive solution is to rehash the entire table from scratch.
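A sketch with linear probing, using a distinct tombstone sentinel to play the role of the one-bit deleted flag (the names are ours):

```python
DELETED = object()   # tombstone: stands in for the deleted bit

def ts_insert(table, key, h):
    m = len(table)
    i = h(key) % m
    for _ in range(m):
        if table[i] is None or table[i] is DELETED or table[i] == key:
            table[i] = key      # empty slots and tombstones are reused
            return
        i = (i + 1) % m
    raise RuntimeError("table full")

def ts_delete(table, key, h):
    m = len(table)
    i = h(key) % m
    for _ in range(m):
        if table[i] is None:
            return
        if table[i] == key:
            table[i] = DELETED  # do not break the probe chain
            return
        i = (i + 1) % m

def ts_lookup(table, key, h):
    # the search skips tombstones; only a truly empty slot stops it
    m = len(table)
    i = h(key) % m
    for _ in range(m):
        if table[i] is None:
            return False
        if table[i] == key:
            return True
        i = (i + 1) % m
    return False
```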

61 Extendible hashing. Main disadvantages of open addressing: 1) deletions cause performance to degrade, since the table becomes cluttered with "deleted" entries; 2) the size of the table cannot be adjusted; if the number of entries is larger than anticipated, the only solution is to allocate a new table and rehash the contents of the old table. Extendible hashing: 1) allows the table to grow and shrink; 2) the approach will extend to spatial data.

62 Extendible hashing. A two-level data structure: a directory (in memory) and a set of leaf pages (on disk). The directory is a table of pointers to leaf pages. Data records are stored in the leaf pages, which are of fixed size b. Use a hash function that maps keys to bit strings of length L; for d <= L, let H_d(k) be the first d bits of H(k). This is called the d-prefix of H(k). Example: L = 5, H(k) = 01010; then H_3(k) = 010.
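The d-prefix is easy to express on bit strings; `H_bits` below is a hypothetical stand-in for the hash function H:

```python
def H_bits(key, L=5):
    # hypothetical hash to an L-bit string
    return format(hash(key) & ((1 << L) - 1), "0%db" % L)

def d_prefix(bits, d):
    # H_d(k): the first d bits of H(k)
    return bits[:d]
```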

63 Extendible hashing. A leaf page consists of all keys whose hash values have a particular prefix. The length of this prefix is called the depth of the page. The maximum depth of any leaf page is called the depth of the table, D (D = 3 in the example). (figure; the strings shown are the hash values, not the keys)

64 Extendible hashing. The directory is a table, T, of length 2^D containing pointers to leaf pages. To locate the page containing key k, compute H_D(k) and follow the pointer in T[H_D(k)]. If the depth of a leaf page is less than the depth of the table, several pointers will point to it; specifically, a leaf page of depth d will be pointed to by 2^(D-d) consecutive entries of T.

65 The directory is a trie! The directory is the top level of a trie, discriminating on the first D bits of the hash value of a key. The individual leaf pages can be open-addressed hash tables, binary search trees, tries, whatever you want.

66 Inserting into a full leaf page: trie splitting. If we insert a key into a non-full page, no problem. But if we try to insert into a full page, we must split the page. The split is accomplished by: 1) increasing the depth of the full page and creating a "buddy" page (extend the trie one level below the overflowed bucket); 2) distributing the records to the two new pages appropriately (based on their (d+1)-prefixes); 3) we may have to iterate this process if all keys fall in one of the buddies.

67 Insertion example. (figure: insert a key with a given hash value into the table; the specific value was lost in transcription)

68 Insertion into an extendible hash table. What happens if we split a leaf of maximal depth? a) It changes the depth of the table itself; b) we must double the size of the directory; c) generally, entries 2i and 2i+1 of the new directory point to the same leaf page as entry i of the old directory, unless entry i was the one being split; d) although if the directory is represented as a trie, this complexity disappears: all splits are treated the same.

69 Example. (figure: an insertion that splits a leaf of maximal depth and doubles the directory)

70 Deletion and extendible hashing. Extendible hashing can accommodate deletion. Suppose entries 2i and 2i+1 point to distinct pages of maximal depth. If deleting an entry from one of them causes their total size to be < b, then the pages can be collapsed into a single page. If they were the only leaf pages of maximal depth, then the directory can be halved in size. Practically, we don't always want to collapse two half-full pages into one full page: the next operation may overflow that page.

71 Linear hashing. A drawback of extendible hashing is that the table must double to grow: not a problem for small tables, but a disadvantage for large ones. Linear hashing allows the table to grow one bucket at a time. The table is extended when its load factor passes a critical value. Buckets are split in sequence, starting at 0 and ending at 2^n. Two hashing functions are always active, one for split buckets and one for unsplit. When the last bucket (the 2^n'th one) is split, the table has doubled in size, and a new cycle of splitting begins.

72 Linear hashing. Suppose that our hash table currently contains m buckets, with 2^n <= m <= 2^(n+1) - 1. Two hash functions are active: h_n(k) = k mod 2^n, used to access buckets m - 2^n through 2^n - 1 (the buckets which have not been split during this cycle of linear hashing), and h_{n+1}(k) = k mod 2^(n+1), used to access buckets 0 through m - 2^n - 1 and buckets 2^n through m - 1 (the buckets which have been split, and the buckets introduced as a result of splitting). Initially, m = 2^n and we have only one hash function; as the algorithm proceeds, buckets are split and we have two hash functions, one for the split portion of the table and one for the unsplit portion.

73 Linear hashing. Each entry in the hash table stores the records that hash to it using a primary bucket, with some capacity, and a sequence of overflow buckets. The load factor for the table is the percentage of its storage being used for records. When we insert a record, we increase this factor; if it passes a critical value, then we split one of the buckets, b. Its records are rehashed using h_{n+1} and distributed into buckets b and b + 2^n. This is because the hash function h_{n+1} looks at one more bit than h_n: if that bit is 0, the record stays in location b; if that bit is 1, it moves to b + 2^n.

74 Linear hashing. The algorithm maintains a pointer s, which points to the next bucket to be split. s starts at 0; when it reaches 2^n, we have split all of the buckets in the current cycle. At this point we increment n by 1 and reset s to 0. Notice that the bucket split is not, in general, the bucket into which the last record was inserted! But, as records are inserted, we will eventually split all buckets. The hash table does not have to occupy contiguous memory; we can insert one level of indirection with a directory, allowing the table to be kept in fragmented memory or on disk.
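The two-function addressing rule can be condensed into one lookup (`bucket_of` is our name): buckets below the split pointer s have already been split, so they are addressed with h_{n+1}:

```python
def bucket_of(k, n, s):
    # h_n(k) = k mod 2**n; if that bucket is already split (index < s),
    # use h_{n+1}(k) = k mod 2**(n+1) instead
    b = k % (1 << n)
    if b < s:
        b = k % (1 << (n + 1))
    return b
```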

75 Example. h_1(k) = k mod 2, s = 0, total capacity = 3 * table size, critical value = 0.5. (figure) Insert 8, 7, ... Now, insert 5: the table passes the critical value, so bucket 0 is split. Insert 6: h_1(6) = 0, which is below the split pointer, so use h_2; h_2(6) = 6 mod 4 = 2.

76 Example. (figure) Insert 2: the table passes its critical load factor, bucket 1 is split, and the cycle is reset (s is set back to 0).

77 Grid files: extendible hashing in 2D. A method for storing large numbers of points on external storage. The data space is bounded by [x_m, x_M] and [y_m, y_M] and is called the grid space; at any time it is partitioned into a set of rectangular blocks by a collection of grid lines. The data points themselves are stored in buckets on disk. The grid file contains one entry for each block in the current partition of the data space; each entry in the grid file is a pointer to the bucket on disk where the data points contained within that block are stored. Buckets have a fixed capacity; overflowing requires splitting.

78 Grid files. If we have m horizontal and m vertical grid lines, then we have defined m^2 grid blocks. So, with 2000 grid lines (1000 vertical and 1000 horizontal) we define 1,000,000 grid blocks, each a pointer to where we will find the address of the bucket (disk address) where the data points are actually stored. (figure)

79 Grid files. So we need two accesses to get to the points themselves; the grid file is organized this way because many grid blocks can point to the same bucket. The method assumes that the grid lines themselves can be stored in memory. If we can store the grid-block directory in memory as well, then only one disk access is required. (figure: adding the large point might require inserting the bold horizontal line; we don't want to split all of the buckets that this line crosses, since that requires reassigning points, so we allow a many-to-one mapping from grid cells to buckets)

80 Grid files. Allow any rectangular set of grid blocks to point to the same bucket; this makes it easy to split and merge buckets as points are added to or deleted from the set. Determining the grid block in which a point (x, y) lies: maintain two 1-d arrays storing the x and y values of the grid lines. A simple binary search of each array yields the grid-block coordinates (x', y') of (x, y). Since the directory size is known, we can use (x', y') to directly compute the location on disk in which this grid block is stored, say, using lexicographic ordering.
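The two binary searches can be done with the standard library's bisect (`grid_block` is our name); the grid-line arrays hold the line coordinates in sorted order:

```python
from bisect import bisect_right

def grid_block(x, y, x_lines, y_lines):
    # count how many grid lines lie at or below each coordinate;
    # the pair of counts identifies the rectangular grid block
    return bisect_right(x_lines, x), bisect_right(y_lines, y)
```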

81 Updating grid files. Initially, there is only one grid block for the entire data space. Assume that the bucket capacity is three points. (figure: a disk of buckets and a disk directory of pointers to buckets)

82 Updating grid files. (figure: the first bucket split, which introduces a grid line and a second bucket)

83 Updating grid files. What is involved in this update? We must read and update two directory entries on disk, which may or may not be on the same disk page, and we must update the first bucket and create a second bucket of points: another three disk accesses (read/write the original bucket, write the new bucket). (figure: add 2 more points)

84 Updating grid files. (figure) Bucket a is split into buckets a and c. Grid block (2,2) points to bucket b, even though it contains no points from that block, because every block must point to some bucket.

85 Updating grid files. Now suppose that we add two points to grid block (2,2). This causes the associated bucket, b, to overflow. But we do not have to add more grid lines, because we have the many-to-one mapping of grid blocks to buckets. (figure: buckets a, c, b, d)

86 Updating grid files. (figure: before and after views of the directory and the buckets a, c, b, d)

87 Updating grid files. Generally, once we have a large number of grid lines, insertions will only lead to bucket splittings and not to the insertion of new grid lines. This is important for efficiency, because when we introduce new grid lines we have to rewrite large parts of the directory and the pointers into it.

88 Extendible hashing in 2D: EXCELL. All grid blocks are the same size. Grid refinement for grid files splits one interval (either x or y) into two intervals; for EXCELL, refinement splits all intervals in two along fixed partition lines and doubles the directory, like extendible hashing. Since all grid blocks are the same size, the linear scales used by the grid file are no longer needed; we can use the Morton code of a grid block to index into the set of pointers to data buckets.
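A Morton (Z-order) code interleaves the bits of the two grid coordinates, so the equal-sized EXCELL blocks can index a flat directory array (`morton` is our name):

```python
def morton(i, j, bits=16):
    # interleave: bit b of i goes to position 2b, bit b of j to 2b+1
    code = 0
    for b in range(bits):
        code |= ((i >> b) & 1) << (2 * b)
        code |= ((j >> b) & 1) << (2 * b + 1)
    return code
```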

89 Example. Bucket capacity is 2 records. The grid directory is implemented as an array, initially containing one element corresponding to the entire space. (figure) Inserting Chicago and Mobile fills up bucket A.

90 Example. (figure) Inserting Toronto causes bucket A to overflow. We must double the directory to include 2 elements: split bucket A and move Toronto and Mobile to bucket B.

91 Example Inserting Buffalo causes bucket B to overflow. Double the directory by splitting along y. (Figure: Atlanta, Chicago, Buffalo, Mobile; buckets A, B, C.)

92 Example Denver can be placed into bucket A. Omaha also belongs in bucket A; bucket A can be split in two without changing the size of the directory, creating D. However, all of the points are in A! (Figure: Chicago, Denver, Omaha, Mobile, Atlanta, Buffalo; buckets A, B, C, D.)

93 Example Split along the x attribute, again doubling the directory. (Figure: Atlanta, Buffalo, Chicago, Denver, Omaha, Mobile; directory entries D, D, C, C, A, E, B, B.)

94 Example Insert Atlanta into bucket B; OK. (Figure: Atlanta, Buffalo, Chicago, Denver, Omaha, Mobile; directory entries D, D, C, C, A, E, B, B.)

95 Example But insertion of Miami into bucket B causes B to overflow. Bucket B is split into B and F, but the directory does not have to be changed. (Figure: after the split, the two entries that pointed to B now point to B and F.)

96 Example (Figure: the final grid, with blocks (1,1) through (4,2) and the directory of pointers to buckets A through F.)
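The directory-doubling versus pointer-repointing behavior in slides 90 through 95 is exactly that of classic extendible hashing. A minimal 1-D Python sketch follows; it is hypothetical code, not the lecture's, with class and method names of my own choosing, bucket capacity 2, Python's built-in `hash`, and no overflow chains.

```python
class Bucket:
    def __init__(self, depth):
        self.items = []      # stored keys (capacity enforced by the table)
        self.depth = depth   # local depth: number of hash bits this bucket "owns"

class ExtendibleHash:
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.global_depth = 0
        self.directory = [Bucket(0)]     # 2^global_depth entries

    def _index(self, key):
        # the low global_depth bits of the hash pick the directory entry
        return hash(key) & ((1 << self.global_depth) - 1)

    def find(self, key):
        # equality search: one directory probe plus one bucket scan
        return key in self.directory[self._index(key)].items

    def insert(self, key):
        while True:
            b = self.directory[self._index(key)]
            if len(b.items) < self.capacity:
                b.items.append(key)
                return
            self._split(b)

    def _split(self, b):
        if b.depth == self.global_depth:
            # bucket is pointed to by a single entry: must double the directory
            self.directory += self.directory
            self.global_depth += 1
        # otherwise several entries share this bucket, so only pointers change
        b.depth += 1
        new = Bucket(b.depth)
        mask = 1 << (b.depth - 1)        # the newly significant hash bit
        keep = []
        for k in b.items:
            (new.items if hash(k) & mask else keep).append(k)
        b.items = keep
        for i, entry in enumerate(self.directory):
            if entry is b and i & mask:
                self.directory[i] = new

h = ExtendibleHash(capacity=2)
for k in range(6):                   # small ints: hash(k) == k in CPython
    h.insert(k)
print(h.global_depth, len(h.directory))  # 2 4
```

In this run, inserting 2 and 4 forces directory doubling because the overflowing bucket's local depth equals the global depth (the situation on slides 90 and 93), while inserting 5 overflows a bucket shared by two directory entries, so only one pointer is updated, just as splitting B into B and F on slide 95 left the directory size unchanged.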
