
Algorithms and Data Structures: (Overflow) Hashing
Marius Kloft, Summer Semester 2016

This Module
- Introduction: 2 lectures
- Abstract data types: 1
- Complexity analysis: 1
- Styles of algorithms: 1
- Lists, stacks, queues: 2
- Sorting (lists): 3
- Searching (in lists, PQs, SOL): 5
- Hashing (to manage lists): 2
- Trees (to manage lists): 4
- Graphs (no lists!): 5
- Sum: 15/26

How Fast Can We Search Elements?
- Unsorted array: searching by key O(n), inserting O(1), pre-processing 0
- Sorted array: searching O(log(n)), inserting O(n), pre-processing O(n*log(n))
- Sorted linked list: searching O(n), inserting O(n), pre-processing O(n*log(n))
- Priority queue: searching O(1) for the minimum (O(n) for all others), inserting O(log(n)), pre-processing O(n)

Beyond log(n) in Searching
- Assume you run a company with ~2000 employees. You often search employees by name to get their ID.
- No employee is more important than any other, and there are no differences in access frequencies, so self-organizing lists don't help.
- The best we can do so far: sort the list into an array. Binsearch will then require log(2000) ≈ 11 comparisons per search.
- Can't we do better?

Recall Bucket Sort
- Assume |A| = n, with m the length of the longest value in A and values from A over an alphabet of size k.
- We first sort A on the first position into k buckets, then sort every bucket again on the second position, etc. After at most m iterations, we are done.
- Time complexity: O(m*n).
- Fundamental idea: for finite alphabets, the characters give us a partitioning of all possible values in linear time such that the partitions are sorted in the right order.

Bucket Sort Idea for Searching
- Fix an m (e.g. m = 3). There are only 26^3 ≈ 18,000 different prefixes of length 3 that a (German) name can start with (ignoring case).
- Thus, we can sort a name s by its prefix s[1..m] in constant time into an array A with |A| = k^m.
- Index in A: A[(s[1]-1)*k^0 + (s[2]-1)*k^1 + ... + (s[m]-1)*k^(m-1)]
- We can use the same formula to look up names in O(1). Cool: search complexity is O(1).
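The index formula above can be checked with a short sketch. This is an illustrative implementation, not from the lecture; the class and method names (PrefixIndex, prefixIndex) are made up, letters are numbered 1..26, and names are assumed to be plain ASCII.

```java
// Sketch of the prefix-indexing formula: map the first m = 3 letters of a
// name to a unique slot in an array of size k^m = 26^3.
public class PrefixIndex {
    static final int K = 26;   // alphabet size

    // index = (s[1]-1)*k^0 + (s[2]-1)*k^1 + (s[3]-1)*k^2
    static int prefixIndex(String s) {
        int index = 0, power = 1;
        for (int i = 0; i < 3; i++) {
            int letter = Character.toLowerCase(s.charAt(i)) - 'a' + 1; // 1..26
            index += (letter - 1) * power;
            power *= K;
        }
        return index;
    }

    public static void main(String[] args) {
        System.out.println(prefixIndex("mueller")); // a slot in [0, 26^3)
    }
}
```

Every 3-letter prefix gets a distinct slot, so lookup is a single array access.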

Key Idea of Hashing
- Given a list S of |S| = n values and an array A with |A| = a.
- Define a hash function h: S → [0, a-1].
- Store each value s ∈ S in A[h(s)].
- To test whether a value q is in S, check whether A[h(q)] ≠ null.
- Inserting and lookup are O(1). But wait ...
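A minimal sketch of this idea, ignoring collisions for now (the next slides address them). The class name, the table size a = 13, and the "k mod a" hash function are illustrative choices (the mod function appears later in the lecture); keys are assumed to be non-negative integers.

```java
// Store value k at A[h(k)]; test membership by checking A[h(q)].
public class DirectHash {
    static final int A_SIZE = 13;                // a = |A|, a small prime
    static Integer[] table = new Integer[A_SIZE];

    static int h(int k) { return k % A_SIZE; }   // assumes k >= 0

    static void insert(int k) { table[h(k)] = k; }   // O(1)

    static boolean contains(int k) {                  // O(1)
        return table[h(k)] != null && table[h(k)] == k;
    }

    public static void main(String[] args) {
        insert(42); insert(7);
        System.out.println(contains(42) + " " + contains(99)); // true false
    }
}
```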

Collisions
- Assume h maps names to their first m characters. Then <Müller, Peter>, <Müller, Hans>, <Müllheim, Ursula>, ... all start with the same 4-prefix, so all are mapped to the same position of A if m < 5.
- These cases are called collisions.
- To minimize collisions, we can increase m, but that requires exponentially more space (a = |Σ|^m), and we only have 2000 employees: what a waste.
- Can't we find better ways to map a name into an array? What are good hash functions?

Example: Dictionary Problem
- Dictionary problem: manage a list S of |S| keys.
- We use an array A with |A| = a (usually a >> n).
- We want to support three operations: store a key k in A, look up a key in A, delete a key from A.
- Applications: compilers (symbol tables over variables, function names, ...), databases (lists of objects such as names, ages, incomes, ...), search engines (lists of words appearing in documents).

Content of this Lecture
- Hashing
- Collisions
- External collision handling
- Hash functions
- Application: Bloom filter

Hash Function Definition
- Let S, |S| = n, be a set of keys from a universe U, and A a set of target values with a = |A|.
- A hash function h is a total function h: U → A.
- Every pair k1, k2 ∈ S with k1 ≠ k2 and h(k1) = h(k2) is called a collision.
- h is perfect if it never produces collisions.
- h is uniform if ∀i ∈ A: p(h(k) = i) = 1/a.
- h is order-preserving iff k1 < k2 ⇒ h(k1) < h(k2).
- We always use A = {0, ..., a-1}, because we want to use h(k) as the address for storing k in an array.

Illustration (three figures)
- U: all possible values of k, mapped to the a addresses of hash table A.
- The actual values of k in S; hash table A with collisions.
- Hash table A; a local cluster resolved.

Topics
- We want hash functions with as few collisions as possible, knowing U and making assumptions about S.
- Hash functions should be computed quickly. Bad idea: sort S and then use the rank as address.
- Collisions must be handled: even if a collision occurs, we still need to give correct answers.
- Don't waste space: A should be as small as possible. Note that a ≥ n must hold if collisions are to be avoided entirely.
- Note: order-preserving hash functions are rare; hashing is bad for range queries.

Example
- We usually have a >> |S|, yet a << |U|.
- If k is a number (or can be turned into a number), a simple and surprisingly good hash function is h(k) := k mod a, for a = |A| being a prime number.

Content of this Lecture: Hashing, Collisions, External Collision Handling, Hash Functions, Application: Bloom Filter

Are Collisions a Problem?
- Assume we have a (uniform) hash function that maps an arbitrarily chosen key k to every position in A with equal probability.
- Given |S| = n and |A| = a, how big are the chances of producing collisions?

Two Cakes a Day?
- Each tutorial group (Übungsgruppe) currently has ~32 persons. Every time someone has a birthday, he or she brings a cake.
- What is the chance of having to eat two pieces of cake on one day? (The birthday paradox.)
- Obviously, there are 365 chances to eat two pieces; each day has the same chance of being a birthday for every person (we ignore seasonal bias, twins, etc.).
- Guess: 5%? 20%? 30%? 50%?

Analysis
- Abstract formulation: an urn with 365 balls. We draw 32 times and place the ball back after every drawing. What is the probability p(32, 365) of drawing some ball at least twice?
- It is the complement of the chance of drawing no ball more than once: p(32, 365) = 1 - q(32, 365), where q(x, y) is the probability that we only draw different balls.
- We draw a first ball. The chance that the second differs from all previous balls is 364/365; the chance that the 3rd differs from the 1st and 2nd (which must themselves differ) is 363/365; and so on.
- In general: p(n, a) = 1 - q(n, a) = 1 - ∏_{i=1}^{n} (1 - (i-1)/a) = 1 - a! / ((a-n)! * a^n)

Results (source: Wikipedia); p(n) here means p(n, 365), in percent:
   n | p(n)
   5 |  2.71
  10 | 11.69
  15 | 25.29
  20 | 41.14
  25 | 56.87
  30 | 70.63
  32 | 75.33
  40 | 89.12
  50 | 97.04
(The source chart's q(n) is a different quantity: the chance that someone has a birthday on the same day as you.)
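The table values can be reproduced directly from the product formula above. The class and method names below are illustrative, not from the lecture.

```java
// Numerical check of p(n,a) = 1 - prod_{i=1}^{n} (1 - (i-1)/a).
public class Birthday {
    static double p(int n, int a) {
        double q = 1.0;                          // probability of no collision
        for (int i = 1; i <= n; i++)
            q *= 1.0 - (double) (i - 1) / a;     // i-th ball differs from the first i-1
        return 1.0 - q;
    }

    public static void main(String[] args) {
        System.out.printf("p(32,365) = %.2f%%%n", 100 * p(32, 365)); // ~75.33%
    }
}
```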

Take-Home Messages
- Collision handling is a real issue: just by chance, there are many more collisions than one intuitively expects.
- The additional time and space it takes to manage collisions must be taken into account.

Content of this Lecture: Hashing, Collisions, External Collision Handling, Hash Functions, Application: Bloom Filter

Hashing: Three Fundamental Methods
- Overflow hashing: collisions are stored outside A. We need additional storage, but this solves the problem of A having a fixed size (even though S may grow) without changing A.
- Open hashing: collisions are managed inside A. No additional storage, but |A| is an upper bound on the amount of data that can be stored. (Next lecture.)
- Dynamic hashing: A may grow and shrink. Not covered here; see Databases II.

Collision Handling
- In overflow hashing, we store values not fitting into A in separate data structures (lists). Two possibilities:
- Separate chaining: A[i] stores a tuple (key, p), where p is a pointer to a list storing all keys mapped to i except the first one. Good if collisions are rare and keys are small.
- Direct chaining: A[i] is a pointer to a list storing all keys mapped to i. Fewer if-then-else branches; more efficient if collisions are frequent or keys are large.

Example: Direct Chaining (h(k) = k mod 7)
- Assume a linked list per cell, with insertions at the list head.
- Insert 5, 15, 3, 7, 8: cell 0 → [7], cell 1 → [8, 15], cell 3 → [3], cell 5 → [5].
- Additionally insert 4 and 12: cell 4 → [4], cell 5 → [12, 5].
- Additionally insert 10 and 19: cell 3 → [10, 3], cell 5 → [19, 12, 5].
- Worst-case space: O(a + n).
- Worst-case time: insert O(1); search O(n), since in the worst case all keys map to the same cell; delete O(n), since we first need to search.
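The example can be reproduced with a small direct-chaining table; a sketch, assuming head insertion into linked lists as on the slides (class and method names are made up).

```java
import java.util.LinkedList;

// Direct chaining: A[i] is a list of all keys with h(k) = i.
public class ChainedHash {
    static final int A = 7;
    @SuppressWarnings("unchecked")
    static LinkedList<Integer>[] table = new LinkedList[A];

    static int h(int k) { return k % A; }

    static void insert(int k) {                  // O(1): insert at list head
        if (table[h(k)] == null) table[h(k)] = new LinkedList<>();
        table[h(k)].addFirst(k);
    }

    static boolean search(int k) {               // O(n) worst case
        return table[h(k)] != null && table[h(k)].contains(k);
    }

    public static void main(String[] args) {
        for (int k : new int[]{5, 15, 3, 7, 8, 4, 12, 10, 19}) insert(k);
        System.out.println(table[5]);            // [19, 12, 5]
    }
}
```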

Average Case Complexities
- Assume h is uniform. After inserting n values, every overflow list has ~n/a elements; α = n/a is also called the fill degree of the hash table.
- How long does the (n+1)-st operation take on average?
- Insert: O(1). Search: if k ∈ L, ~α/2 comparisons; else α comparisons. Delete: same as search.

Improvement
- We may keep every overflow list sorted. If stored in a (dynamic) array, binsearch requires log(α) comparisons; if stored in a linked list, searching k (whether k ∈ L or k ∉ L) requires ~α/2. Disadvantage: insert now requires ~α/2 to keep the list sorted.
- If we first have many inserts (the build phase of a dictionary) and then mostly searches, it is better to first build unsorted overflow lists and sort them only once, when changing phase.
- We may use a second (smaller) hash table with a different hash function, especially if some overflow lists grow very large. See double hashing (next lecture).

But ...
- Searching with ~α/2 comparisons on average doesn't seem very attractive. But one typically uses hashing in cases where α is small.
- Usually α < 1, so a search on average takes only constant time; even for α = 10, a search takes only ~5 comparisons.
- For instance, let |S| = n = 10,000,000 and a = 1,000,000. Hash table (uniform): ~5 comparisons. Binsearch: log(1E7) ≈ 23 comparisons.
- But: in many situations the values in S are highly skewed, and the average-case estimation may go grossly wrong. Experiments help.

Content of this Lecture: Hashing, Collisions, External Collision Handling, Hash Functions, Application: Bloom Filter

Hash Functions
- Requirements: should be computed quickly; should spread keys equally over A even if local clusters exist; should use all positions in A with equal probability (uniformity).
- Simple and good: h(k) := k mod a, the division-rest method.
- If a is prime: few collisions for much real-world data (an empirical observation).

Why Prime?
- We want hash functions that use the entire key (an empirical observation from many examples). Assume the division-rest method.
- Often keys have an internal structure, e.g. key = leftstr(firstname, 3) + leftstr(lastname, 3) + year(birthday) + gender.
- Think of the representation of k as a bitstring. If a is even, then h(k) is even iff k is even: males get 50% and females get 50% of A, with no adaptation to the data.
- If a = 2^i, h(k) only uses the last i bits of any key, and these are usually not equally distributed.
- So a being prime is often a good idea.
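The last point can be demonstrated numerically. The keys below are hypothetical, chosen so that they differ only above the low four bits (e.g. record IDs padded to multiples of 16); class and method names are made up.

```java
import java.util.HashSet;
import java.util.Set;

// If a = 2^i, h(k) = k mod a looks only at the last i bits: structured keys
// that differ only in the upper bits all collide. A prime a spreads them.
public class WhyPrime {
    static int distinctSlots(int[] keys, int a) {
        Set<Integer> slots = new HashSet<>();
        for (int k : keys) slots.add(k % a);
        return slots.size();
    }

    public static void main(String[] args) {
        int[] keys = {16, 48, 160, 1024, 4096, 65536}; // identical low 4 bits
        System.out.println(distinctSlots(keys, 16));   // a = 2^4: all collide
        System.out.println(distinctSlots(keys, 13));   // a prime: keys spread out
    }
}
```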

Other Hash Functions
- Multiplicative method ("multiplikative Methode"): h(k) = floor(a * (k*x - floor(k*x))). Multiply k by x, remove the integer part, multiply by a, and cut to the next smaller integer. x can be any real number; the best distribution on average is obtained for x = (1+√5)/2, the golden ratio ("Goldener Schnitt").
- Digit sum ("Quersumme"): h(k) = (k mod 10) + ...
- For strings: h(k) = f(k) mod a, with f(k) adding the byte values of all characters in k.
- There are no limits to fantasy: look at your data and its distribution of values, and make sure local clusters are resolved.
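Sketches of these three methods for illustration; the table sizes passed in, the class and method names, and the example inputs are my own choices, not the lecture's.

```java
// Illustrative implementations of the hash functions named on the slide.
public class OtherHashes {
    // Multiplicative method: h(k) = floor(a * frac(k*x)), x = golden ratio
    static int multiplicative(long k, int a) {
        double x = (1 + Math.sqrt(5)) / 2;      // "Goldener Schnitt"
        double frac = k * x - Math.floor(k * x);
        return (int) Math.floor(a * frac);
    }

    // Digit sum ("Quersumme"): add the decimal digits, reduce mod a
    static int digitSum(long k, int a) {
        long sum = 0;
        for (long v = k; v > 0; v /= 10) sum += v % 10;
        return (int) (sum % a);
    }

    // String hashing: add the byte values of all characters, reduce mod a
    static int byteSum(String s, int a) {
        int sum = 0;
        for (byte b : s.getBytes()) sum += b & 0xFF;
        return sum % a;
    }

    public static void main(String[] args) {
        System.out.println(multiplicative(123456, 100));
        System.out.println(digitSum(123456, 97));   // digits sum to 21
        System.out.println(byteSum("ate", 83));
    }
}
```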

Java hashCode()
The JDK documentation for String.hashCode():
    Returns a hash code for this string. The hash code for a String object is computed as
        s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1]
    using int arithmetic, where s[i] is the i-th character of the string, n is the length of the string, and ^ indicates exponentiation. (The hash value of the empty string is zero.)
Object.hashCode(): the default hashCode() method uses the 32-bit internal JVM address of the object as its hash code (if the object is moved in memory during garbage collection, the hash code nevertheless stays constant). This default is not very useful: to look up an object in a HashMap, you would need the exact same key object by which the key/value pair was originally filed. Normally, when you look something up, you don't have the original key object itself, just some data for a key. So, unless your key is a String, you will nearly always need to implement hashCode() and equals() on your key class.
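The documented polynomial can be re-implemented and checked against the JDK's own value. This sketch evaluates it with Horner's rule, which is equivalent to the documented formula; the class and method names are made up.

```java
// Reimplementation of the documented String.hashCode() polynomial.
public class StringHash {
    static int polyHash(String s) {
        int h = 0;
        for (int i = 0; i < s.length(); i++)
            h = 31 * h + s.charAt(i);   // Horner form of s[0]*31^(n-1)+...+s[n-1]
        return h;
    }

    public static void main(String[] args) {
        // both print the same value for any string
        System.out.println(polyHash("ate") + " " + "ate".hashCode());
    }
}
```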

Hashing
- Two key ideas to achieve scalability for relatively simple problems on very large datasets: sorting and hashing.

Pros / Cons
- Sorting: typically O(log(n)) for searching, in worst and average case; requires sorting first (which can be reused); an application/domain-independent method; no additional space; efficient for extensible data structures. Sometimes preferable.
- Hashing: typically O(1) average case, but O(n) worst case; no preparatory work; application/domain-specific hash functions and strategies; usually additional space required; extensibility is difficult. Sometimes preferable.

Content of this Lecture: Hashing, Collisions, External Collision Handling, Hash Functions, Application: Bloom Filter

Searching an Element
- Assume we want to know whether k is an element of a list S of 32-bit integers, but S is very large. From now on, we count sizes in keys (32 bits each).
- S must be stored on disk. Assume testing k in memory costs very little, but loading a block (size b = 1000 keys) from disk costs enormously more. Thus, we only count IO: how many blocks do we need to load?
- Assume |S| = 1E9 (1E6 blocks) and we have enough memory for 1E6 keys, i.e., enough for 1000 of the 1 million blocks.

Options
- If S is not sorted: if k ∈ S, we need to load 50% of S on average (~0.5E6 IO); if k ∉ S, we need to load S entirely (~1E6 IO).
- If S is sorted, it doesn't matter whether k ∈ S or not: we need to load log(|S|/b) = log(1E6) ≈ 20 blocks.
- Notice that we are not using our memory at all.

Idea of a Bloom Filter
- Build a hash map A as big as the memory, and use A to indicate whether a key is in S or not.
- The test may fail, but only in one direction: if A answers "maybe", we don't know for sure that k ∈ S; but if A answers "no", we know for sure that k ∉ S.
- A acts as a filter: a Bloom filter.
- Bloom, B. H. (1970). "Space/Time Trade-offs in Hash Coding with Allowable Errors". Communications of the ACM 13(7): 422-426.

Bloom Filter: Simple
- Create a bit array A with |A| = a = 1E6*32: we fully exploit our memory, and A is always kept in memory. Choose a uniform hash function h.
- Initialize A (offline): ∀k ∈ S: A[h(k)] = 1.
- Searching k given A (online): test A[h(k)] in memory. If A[h(k)] = 0, we know that k ∉ S (with 0 IO). If A[h(k)] = 1, we need to search k in S.

Bloom Filter: Advanced
- Create a bit array A with |A| = a = 1E6*32, kept in memory as before.
- Choose j independent uniform hash functions h_1, ..., h_j. Independent: the values of one hash function are statistically independent of the values of all other hash functions.
- Initialize A (offline): ∀k ∈ S, ∀j: A[h_j(k)] = 1.
- Searching k given A (online): ∀j, test A[h_j(k)] in memory. If any A[h_j(k)] = 0, we know that k ∉ S. If all A[h_j(k)] = 1, we need to search k in S.
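A sketch of this scheme. The j "independent" hash functions are simulated by salting one integer-mixing function with the function index; this mixing construction (and the parameters in main) is my own assumption for illustration, not the lecture's.

```java
import java.util.BitSet;

// Bloom filter with a bit array of a bits and j salted hash functions.
public class BloomFilter {
    final BitSet bits;
    final int a, j;

    BloomFilter(int a, int j) { this.bits = new BitSet(a); this.a = a; this.j = j; }

    // i-th salted hash, range [0, a); constants are arbitrary odd mixers
    int h(int key, int i) {
        long x = key * 0x9E3779B97F4A7C15L + i * 0xC2B2AE3D27D4EB4FL;
        x ^= (x >>> 33);
        return (int) Math.floorMod(x, (long) a);
    }

    void insert(int key) { for (int i = 0; i < j; i++) bits.set(h(key, i)); }

    // false = certainly not in S (0 IO); true = maybe in S (go search S on disk)
    boolean mightContain(int key) {
        for (int i = 0; i < j; i++) if (!bits.get(h(key, i))) return false;
        return true;
    }

    public static void main(String[] args) {
        BloomFilter f = new BloomFilter(1 << 16, 5);
        for (int k = 0; k < 1000; k++) f.insert(k);
        // no false negatives: the first test is always true
        System.out.println(f.mightContain(42) + " " + f.mightContain(123456789));
    }
}
```

Note the asymmetry: inserted keys always answer true, while a non-inserted key answers true only with small probability.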

Analysis
- Assume k ∉ S. Let C_n denote the cost of such a (negative) search. We only access disk if all A[h_j(k)] = 1 by chance. How often does that happen? In all other cases, we perform no IO and assume 0 cost.
- Assume k ∈ S. We will certainly access disk, as all A[h_j(k)] = 1 (but we don't know whether this is by chance or not). Thus C_p = 20, using binsearch and assuming S is kept sorted on disk.

Chances for a False Positive
- For one k ∈ S and one hash function, the chance that a given position in A stays 0 is 1 - 1/a. For j hash functions, the chance that it stays 0 is (1 - 1/a)^j. For j hash functions and n values, the chance that it stays 0 is q = (1 - 1/a)^(j*n).
- So the probability of a given bit being 1 after inserting n values is 1 - q.
- Now consider a search for a key k, which tests j bits. The chance that all of these are 1 by chance is (1 - q)^j; "by chance" means the case where k is not in S.
- Thus, C_n = (1 - q)^j * C_p + (1 - (1 - q)^j) * 0. In our case, for j = 5: 0.001; for j = 10: 0.000027.
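The false-positive formula can be evaluated directly. The parameters in the example below are a classic illustrative setting (10 bits per stored key, j = 7), chosen by me, not the lecture's scenario.

```java
// Direct evaluation of q = (1 - 1/a)^(j*n) and P(false positive) = (1 - q)^j.
public class BloomMath {
    static double falsePositiveRate(double n, double a, int j) {
        double q = Math.pow(1.0 - 1.0 / a, j * n);  // P(a given bit is still 0)
        return Math.pow(1.0 - q, j);
    }

    public static void main(String[] args) {
        // n = 1E6 keys, a = 1E7 bits (10 bits per key), j = 7 hash functions
        System.out.printf("%.4f%n", falsePositiveRate(1e6, 1e7, 7)); // ~0.0082
    }
}
```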

Average Case
- Assume we look for all possible values (|U| = u = 2^32) with the same probability: (u-n)/u of the searches are negative, n/u are positive.
- The average cost per search is c_avg := ((u-n)*C_n + n*C_p) / u. For j = 5: 0.14; for j = 10: 0.13. Much better than sorted lists.
- A larger j decreases the average cost but increases the effort for each single test. What is the optimal value for j?

Exemplary Questions
- Assume |A| = a, |S| = n, and a uniform hash function. What is the fill degree of A? What is the average-case search complexity if collisions are handled by direct chaining? What if collisions are handled by separate chaining?
- Assume the hash function h = ... and S being integers. Show A after inserting each element from S = {17, 256, 13, 44, 1, 2, 55, ...}.
- Describe the standard Java hash function. When is it useful to provide your own hash function for your own classes?