Extendible Chained Bucket Hashing for Main Memory Databases


Pyung-Chul Kim*, Kee-Wook Rim, Jin-Pyo Hong
Electronics and Telecommunications Research Institute (ETRI)
P.O. Box 106, Yusong, Taejon, 305-600, South Korea

* The current address of the author is Korea Computer Communications (KCOM), Ltd., Yusong-Hanjin Officetel #1015, Pongmyung 535-5, Yusong, Taejon, 305-301, South Korea. He can be reached at the e-mail address pckim@kcomtj.co.kr.

Abstract

The objective of this paper is to develop a high performance hash-based access method for main memory database systems. Chained bucket hashing is known to provide the fastest random access to a static file stored in main memory. For a dynamic file, however, chained bucket hashing is inappropriate because its address space cannot be adapted to the file size without total reorganization. Extendible hashing, which was proposed for disk-based environments, can adapt its address space as the file grows or shrinks. However, extendible hashing is impractical in a main memory environment because of its large directory. In this paper, we introduce a new hash-based access method called extendible chained bucket hashing, a complementary integration of chained bucket hashing and extendible hashing for dynamic files in main memory databases. We carried out an experiment comparing our hashing scheme with two existing proposals: linear hashing modified for the main memory environment and controlled search multidirectory hashing. The experiment shows that extendible chained bucket hashing outperforms the other proposals in both key loading time and search time.

1 Introduction

As memory chips become cheaper and their density increases, it becomes feasible in many applications to maintain the entire database contents in main memory (Garcia-Molina 1992). A database system should implement an access method that supports fast random access to a record having a particular key in a large file. Obviously, it is desirable for the access

methods of main memory resident database systems to take advantage of the fast random access speed of main memory devices. Hashing is a well-known technique that permits very fast random access to a large file through key-to-address calculation. Several hashing schemes have been proposed for both main memory and disk-based databases. They differ in how they expand the address space and how they handle overflows caused by collisions, in which more than one key has the same hash value. In a main memory environment, chained bucket hashing (CBH) is known to provide the fastest access time to a static file (Knuth 1973). The CBH scheme, however, may not be directly applicable to a dynamic file, because it requires global reorganization of the hash table to expand or shrink its address range. Without such reorganization, CBH suffers from low storage utilization when the number of inserted keys is too small, or from long chains when it is too large. In disk-based environments, much research has focused on hashing schemes that can adapt their address space to files whose size changes dynamically. These include extendible hashing (EH) (Fagin et al. 1979), linear hashing (LH) (Litwin 1980), and their extensions. Clearly, these schemes may not be suitable for main memory resident database systems, because the access time of main memory is orders of magnitude less than that of disk devices and there is no block transfer concept in main memory. Lehman has investigated the performance and storage utilization of these schemes (Lehman 1986). In particular, the EH scheme suffers from a large directory when the leaf node size is relatively small, and the LH scheme shows too slow a response time for insertion and search. More recently, two new hashing schemes were proposed for main memory resident database systems: linear hashing modified for a hash table stored in main memory (Larson 1988), and controlled search multidirectory hashing (CSMH) (Analyti and Pramanik 1992).
Unlike the original LH for disk files, a directory entry of the modified LH contains a pointer to the head of a chain of the records hashing to that entry. The average chain length is controlled by a parameter: if the average chain length exceeds the control parameter after an insertion, the modified LH splits its directory in the predefined linear order used in the original LH. CSMH uses a tree-structured hash directory to access a data file. The hash directory consists of a set of subdirectories that adapt their size dynamically to a changing number of records. This adaptation requires no total reorganization of the directory structure, since the insertion or deletion of a record affects only one node of the tree. The size and the height of the hash directory are variable. To access a record, the hash address bits of the key are partitioned into distinct parts, each of which indexes a subdirectory at a particular level. In CSMH, the expected number of key comparisons for a successful search is bounded by a user-supplied control parameter. Both the modified LH and CSMH achieve a directory whose expected size grows linearly in the number of inserted keys. One shortcoming of the modified LH is that the time to transform a key into an address is not constant, due to the linear expansion of the address space. The characteristics of CSMH have been analyzed only in a very high performance margin, where the average number of key comparisons for a successful search is less than 2; moreover, the key loading time of CSMH has not been analyzed. Both hashing schemes assume that the hash values of records are unique, which may not be attainable in a practical application. The objective of this paper is to develop a high performance hash-based access method

for dynamic files stored in main memory. We introduce a new hashing scheme called extendible chained bucket hashing (ECBH), a seamless integration of EH and CBH. ECBH replaces each leaf node in EH with a chained bucket hash table and record identifier chains. In the ECBH scheme, a hash table of sufficiently large size (e.g., 1024 entries) is expected to have the same effect as a large leaf node in the original EH; that is, the directory size is negligible even for a large file. Unlike a leaf node in the original EH, however, the search time within a hash table can be kept in O(1). Thus ECBH inherits high performance from CBH and gradual extensibility from EH. The performance and storage utilization of ECBH are controlled by a user-supplied parameter. We analyze the performance characteristics of ECBH and compare it with modified LH and CSMH through experiment. The results show that ECBH outperforms modified LH and CSMH in both key loading time and search cost. ECBH and modified LH have similar storage requirements; in particular, the storage overhead of ECBH is smaller than that of CSMH in a reasonable performance margin. We also describe how to refine the ECBH structure to cope with duplicate keys.

The rest of the paper is organized as follows. Section 2 reviews the original EH and CBH schemes, describes how to integrate them into our ECBH structure, presents a detailed split algorithm, and explains how ECBH can be refined to deal with duplicate key insertions. Section 3 analyzes ECBH performance characteristics and compares ECBH with the other proposals by experiment. Section 4 contains concluding remarks.

2 Extendible Chained Bucket Hashing

2.1 Review of Extendible Hashing and Chained Bucket Hashing

We briefly review the original CBH and EH methods. CBH consists of a hashing function, a hash table, and linked lists of record identifiers (RIDs)* (see Figure 1) (Knuth 1973).
Figure 1. Chained bucket hashing structure.

* A record in a database file is identified by its unique RID. In general, an index of a file contains RIDs rather than the records themselves, because more than one index may be built on the same file. Furthermore, in main memory databases, even key values need not be stored in an index, because they can be accessed very efficiently through pointers.

The hash function transforms a key value into an address whose range is the cardinality of the hash table. The address locates an entry of the hash table, and the entry points to a list of the RIDs having that hash address. The time to compute the hash value of a key and to locate an entry of the hash table is constant. When keys are uniformly distributed, the expected chain length of CBH equals 1 + m/(2D), where m is the number of inserted keys and D is the hash table size (Knuth 1973). When D is fixed, therefore, the search cost increases linearly as m grows. This characteristic is undesirable, because the primary goal of a hash-based access method is O(1) search time. It is possible to expand the hash table (e.g., to double it) whenever the expected chain length exceeds a threshold, but the expansion requires a total reorganization of the hash table and re-linking of the RIDs, which may cause an intolerable performance drop in a typical main memory resident database application.

EH employs a hash function, a dynamic directory, and leaf nodes pointed to by entries of the directory (see Figure 2) (Fagin et al. 1979). A key is transformed into a hash value, and the global depth least significant bits of the hash value locate a directory entry. The entry points to a leaf node of fixed size (generally, a disk page). A leaf node with local depth d contains RIDs whose hash values share the same d least significant bits; therefore, a leaf node may be shared by more than one directory entry. In the figure, R(x) represents the RID of a record, where x is the sequence of the global depth least significant bits of its hash value.

Figure 2. Extendible hashing structure.
When a leaf node with local depth d overflows, it splits into two nodes based on the (d + 1)-th least significant hash address bit of its RIDs, and d increases by one. The two nodes correspond to siblings when the directory is viewed as a binary radix search tree. In Figure 2, the nodes with local depth 3 (i.e., those containing R(010) and R(110), respectively) are siblings. If the local depth of a leaf node exceeds the global depth after a split, the directory doubles to make room to point to the split leaf nodes.

Let b be the maximum number of RIDs in a leaf node. For b > 1, Flajolet has investigated the expected number D of directory entries of EH and obtained the coarse approximation D ≈ (3.92/b) m^(1 + 1/b) (Flajolet 1983), where m is the number of inserted records. Therefore, the directory size of EH with a sufficiently large node size increases almost linearly as m grows. In fact, the expected directory size is negligible compared to the storage for leaf nodes: if b = 1024, then after a million inserts the number of directory entries is expected to be 4068.8. For such a large node size, however, the search cost would not be acceptable in many high performance main memory resident databases. As b approaches 1, the number of directory entries grows more than linearly. For b = 1, it has been shown that doubling the number of inserted records increases the number of hash address bits needed to differentiate the hash addresses by two; that is, the expected directory size of EH with leaf node size 1 is in O(m^2) (Analyti and Pramanik 1992).

2.2 Integrating Extendible Hashing and Chained Bucket Hashing

From the above observation, we develop a new hashing scheme called extendible chained bucket hashing (ECBH) by combining EH and CBH seamlessly. ECBH replaces each leaf node in EH with a chained bucket hash table and RID chains. In the ECBH method, a hash table of sufficiently large size (e.g., 1024 entries) is expected to have the same effect as a large leaf node in the original EH, which implies that the directory size is negligible even for a large file. Unlike a leaf node in the original EH, however, the search time within a hash table can be kept in O(1). Thus the ECBH scheme inherits the high performance of CBH and the gradual extensibility of EH at the same time. Figure 3 shows an example of the ECBH structure. ECBH consists of a hash function, a directory, several hash tables, and linked lists of RIDs. The directory in ECBH grows and shrinks in the same way as in the original EH. All hash tables have the same fixed cardinality.
Unlike in the original CBH, each hash table is associated with a local depth, which has the same meaning as that of a leaf node in the original EH. A key K is transformed into a hash value H, which we view as a concatenation of two bit sequences <H1:H2>. H2 locates an entry of the directory (i.e., a hash table), and H1 locates an entry within that hash table. If the cardinality of a hash table is C, the number of bits reserved for H1 is log2 C. In practice, C should be a power of two so that bit operations can be used instead of the modulo operator. The remaining least significant bits of H form H2, which determines the maximum global depth. For example, if H is represented in 32 bits and C is 1024, the 10 most significant bits of H form H1 and the remaining 22 least significant bits form H2; the maximum global depth is then 22. The performance of ECBH is controlled by a user-supplied parameter l, which bounds the average chain length. The average chain length L_avg

is computed as

    L_avg = (1/m) * sum_{i=1..m} L_RID_i,

where L_RID_i is the number of chain nodes accessed in locating RID_i (so the first RID in a chain has L_RID = 1) and m is the number of inserted RIDs. In Figure 3, for example, L_R(111) is 2 and L_avg is 8/7.

Figure 3. Extendible chained bucket hashing structure.

After a record is inserted into a hash table with local depth d, L_avg is computed and checked against l. If it exceeds l, the hash table splits into two tables based on the (d + 1)-th least significant hash address bit of its RIDs, and d increases by one. If the new local depth is greater than the global depth, the directory is doubled to make room to point to the split hash tables. If the global depth has reached its maximum value, no further split is performed even when the average chain length exceeds the control parameter. Figure 4 shows the algorithm to split a hash table in more detail. The maximum global depth g_max is calculated as (h - log2 C), where h is the number of bits of a hash value and C is the cardinality of a hash table. Insertion, search, and deletion are straightforward. Deletion of a record may leave a hash table empty; when one of two siblings becomes empty, we simply drop it and redirect the directory entries containing it to its sibling, whose local depth is decreased by one. If every pointer in the directory equals its sibling pointer, we halve the directory and decrease the global depth by one. It is also possible to use another parameter to control minimum storage utilization: two siblings can be merged when their storage usage falls below that parameter, even if neither is empty.

Algorithm SPLIT
input: P - the hash table to be split
// The following are constants:
//   C: cardinality of a hash table
//   l: control parameter for average chain length
//   g_max: maximum global depth
1. If the local depth of P has reached g_max, stop.
2. Allocate a new hash table Q.
3. For 1 <= i <= C, do the following.
   3.1 For each RID R in the list pointed to by the i-th entry of P, do the following.
       3.1.1 Form key value K from R.
       3.1.2 Compute hash value H from K.
       3.1.3 If the (d + 1)-th least significant bit of H is 1, where d is the local depth of P, move R to the i-th entry of Q.
4. Increase the local depth of P by one, and set the local depth of Q to that of P.
5. If the local depth of P is greater than the global depth, double the directory (the new area is simply filled with the original directory contents) and increase the global depth by one.
6. Redirect every sibling entry of the directory that points to P to point to Q (so that the directory complies with Step 3.1.3).
7. Calculate the average chain length L_avg; if it is greater than l, SPLIT whichever of P and Q has the larger average chain length.

Figure 4. Algorithm SPLIT for ECBH.

2.3 Refinements for Practical Use

As with other hash-based access methods, ECBH may become skewed if hash values are not uniformly distributed. Even if the hash function itself provides very good randomization, degeneration can occur when many duplicate keys are inserted. In theory, none of modified LH, CSMH, and ECBH can store two RIDs whose keys are the same while keeping the average chain length at 1.0: the directory would grow infinitely. However, many database applications require very fast key-associative access to a large file that contains many duplicate keys. To broaden the applicability of ECBH, we present the following refinements. First, we can limit the infinite growth of the directory by using a reasonable maximum global depth.
Recall that the maximum global depth g_max is computed as (h - log2 C), where h is the number of bits of a hash value and C is the cardinality of a hash table. This number may be too large for some applications, in which case users may impose a smaller maximum global depth k < g_max for a particular ECBH; the number of directory entries is then limited to 2^k. Second, we can introduce another parameter u to control the storage utilization of a hash table, defined as the ratio of non-empty entries to total entries. A hash table is not split if its storage utilization is less than u, even if the average chain length has exceeded the control parameter l. When the storage utilization of a hash table drops below u after a deletion, we test whether merging it with

its sibling still results in storage utilization lower than u. If so, we merge them and decrease the local depth by one. The merge requires no computation of hash values; instead, the RID list linked from the i-th entry of one hash table is simply appended to the i-th entry of its sibling. Third, splitting a hash table may produce an empty hash table when all RIDs move to one side. An empty hash table can be represented without occupying the storage of a non-empty one; it suffices to reserve a single variable for its local depth.

3 Performance of Extendible Chained Bucket Hashing

3.1 Performance Analysis

The number of hash tables in ECBH grows linearly, as in modified LH. The difference is that modified LH splits one chain at a time, and the chain to be split is selected in a predefined sequence. Both modified LH and ECBH use separate chaining for collision resolution. The probability that all hash tables in ECBH appear at two successive levels (i.e., the difference of local depths is 1) is nearly 1 for practical numbers of inserted records and hash table cardinalities (Fagin et al. 1979). This implies that we can view the hash tables of ECBH as two classes, expanded and not-expanded tables, which correspond to expanded and not-expanded entries in modified LH, respectively. Therefore, we expect that the total number of hash table entries in ECBH approximately equals the number of directory entries of modified LH for a given number of inserted keys and average chain length. Further, the expected number of records per hash table entry is approximately the same as that of modified LH, which can be computed as a function of the average chain length L_avg (Analyti and Pramanik 1992). This will be verified through the experiment. In ECBH, the time to compute a hash value is constant.
This is one of the most significant differences from modified LH, in which the time shows cyclic oscillation. The number of index accesses is two: one for the directory and another for a hash table. The average number of key comparisons for a successful search is L_avg, which is controlled by a user parameter.

3.2 Performance Experiment

3.2.1 Implementation Details

Modified LH requires contiguous memory for the directory while it expands, so we implemented the dynamic array suggested by (Larson 1988). The segment size of the dynamic array is set to 1024 in order to use bit operations when locating a directory entry. The directory sizes shown in the results include the storage overhead of the dynamic array. A directory entry takes 8 bytes: a four-byte pointer to its RID chain and a four-byte integer recording the number of RIDs in the chain. A directory entry in CSMH may become the head of either an RID list or a subdirectory.

We implement a CSMH directory entry in 8 bytes with the help of the union facility of the C programming language. The first byte of each entry identifies the type of the entry. If the entry is the head of an RID list, the remaining area contains the number of RIDs and the start address of the list. If it is the head of a subdirectory, the remaining area describes its depth, the number of its subdirectories, and the start address of the array of its entries. For the ECBH test, we set the cardinality of a hash table to 1024. An ECBH directory entry takes 4 bytes, containing the address of its hash table. A hash table entry takes 8 bytes: a four-byte pointer to the RID chain and a four-byte integer storing the number of RIDs in the chain. Each hash table carries 12 bytes of control information: local depth, chain length, and number of stored RIDs.

All test programs are implemented in the C programming language. We use 32 bits to represent a hash value. The hash function is implemented with a permutation table (Pearson 1990). Keys are 8 bytes long and are generated with the library function random(3), which employs a nonlinear additive feedback random number generator. Each key set contains 50,000 unique items; in fact, their hash values are also unique. Library function malloc(3) is used to allocate or expand memory, and memcmp(3) to compare key values. A record identifier takes 4 bytes. The machine platform is a Sun4c with 24 megabytes of physical memory running SunOS 4.1.1. We locked the test programs' code and data segments in core to keep them from being swapped out to disk. All test programs were run in single-user mode. Every measurement is the average of 100 runs, each using a set of keys generated with a different seed.

3.2.2 Experiment Results

In all strategies, the storage overhead grows linearly as the number of inserted keys increases (see Figure 5).
Figure 5. Storage cost vs. number of inserted keys.

The storage space requirement for ECBH and modified LH is almost the same. In a very high performance margin (i.e., when the average chain length is close to

1.0), the amount of storage space for modified LH and ECBH grows to about 4 megabytes (see Figure 6). This implies that both schemes essentially follow the storage utilization characteristic of chained bucket hashing.

Figure 6. Storage cost vs. average chain length.

As pointed out in (Analyti and Pramanik 1992), CSMH shows the best storage utilization when the average chain length is less than 2.0. As the chain length grows, however, CSMH takes more storage than the others (see Figures 6 and 7). Note that the storage cost in the figures does not include the memory space for the RID chains, because that space is the same regardless of hashing strategy.

Figure 7. Keys per directory (or hash table) entry vs. average chain length.

The results in Figure 7 verify that the expected number of records per hash table entry obtained by analysis agrees with the experiment. Figure 8 shows the time to load 50,000 keys at different average chain lengths. Modified LH and ECBH have almost the same shape of graph. We believe that the performance difference comes from the cost of locating a directory entry and the number of function calls to

Figure 8. Loading time vs. average chain length.

Figure 9. Loading time vs. number of inserted keys, with the average chain length fixed at 1.5.

split. Modified LH relies on modulo operators to locate a directory entry from a hash value, while ECBH makes use of bit operators. Modified LH splits one RID chain at a time and calls the split function at every split, while ECBH splits one hash table at a time. CSMH suffers from overly frequent reorganization of its subdirectories. Moreover, it is not easy to reuse or preallocate memory areas to reduce memory management cost, because the size varies across subdirectories. As the average chain length grows, its loading time approaches that of ECBH. Figure 9 presents the loading time when the average chain length is fixed at 1.5. All strategies show linearly increasing response time as the number of inserted keys grows. Figure 10 presents the average time to search 50,000 records in each hashing scheme.

[Figure 10. Search time vs. average chain length.]

The true average time for each hashing scheme lies within 3% of the observed average time with 95% confidence. ECBH outperforms both modified LH and CSMH, and its performance is linear. In modified LH, locating a directory entry requires extra computation if a hash value falls into the expanded portion of the directory entries. The size of the expanded portion varies with the average chain length. As the expanded portion grows, the probability that a hash value falls into it increases, and so does the time to locate a directory entry. This explains why the search time of modified LH has the form of a periodic function. In the case of CSMH, the search time decreases slightly even when the average chain length increases from 1.05 to 1.5. This is because increasing the average chain length makes the directories more balanced, thereby reducing the number of directory accesses. Overall, modified LH turned out to be slower than both ECBH and CSMH because of the cost of the modulo operation used to locate a directory entry. CSMH is slower than ECBH because of the overhead of traversing its tree-structured subdirectories.

4 Concluding Remarks

In this paper, we investigated high performance hash-based access methods for main memory database systems. Original chained bucket hashing (CBH) provides very fast response time but is not suitable for dynamic files. Original extendible hashing (EH) supports gradual extensibility of the address space with local reorganization, but suffers from a large directory size in high performance applications. From this observation, we proposed a new hashing scheme named extendible chained bucket hashing (ECBH), a seamless integration of the original EH and CBH. ECBH replaces each leaf node in EH by a chained bucket hash table and RID chains.
In ECBH, a hash table of sufficiently large size (e.g., 1024 entries) has the same effect as a large leaf node in the original EH. This implies that the directory size is negligible even for a large file. Unlike a leaf node in the original EH, however, the search time within a hash table can be kept O(1). That is, ECBH inherits its high performance from CBH and its gradual extensibility from EH. Performance and
storage utilization of ECBH are controlled by a user-given parameter. We also described how to refine the ECBH structure to deal with duplicate keys and skewed key insertions. We analyzed the performance characteristics of ECBH and carried out an experiment to compare ECBH with other strategies proposed for main memory databases: linear hashing (LH) modified for main memory and controlled search multidirectory hashing (CSMH). The experimental results show that ECBH outperforms these proposals in both key loading time and search cost. ECBH and modified LH require similar storage space. In particular, the directory and hash table size of ECBH is smaller than that of CSMH within a reasonable performance margin. We have implemented a multiuser main memory storage system called FLASH that employs ECBH as its primary access method. The FLASH system organizes shared memory as a set of fixed-size partitions (16 Kbytes). We use one partition to store the directory of ECBH. Each hash table is maintained in a separate partition. A node in an RID list is implemented as a record of the FLASH file manager.

References

Analyti, A. and Pramanik, S. (1992) Fast Search in Main Memory Databases. Proc. of ACM SIGMOD Conf. on Management of Data, pp. 215-224.

Fagin, R., Nievergelt, J., Pippenger, N., and Strong, H. (1979) Extendible Hashing - A Fast Access Method for Dynamic Files. ACM Trans. on Database Systems, Vol. 4, No. 3, pp. 315-344.

Flajolet, P. (1983) On the Performance Evaluation of Extendible Hashing and Trie Searching. Acta Informatica, Vol. 20, pp. 345-369.

Garcia-Molina, H. (1992) Main Memory Database Systems: An Overview. IEEE Trans. on Knowledge and Data Engineering, Vol. 4, No. 6, pp. 509-516.

Kim, P.-C. et al. (1994) FLASH: A Main Memory Storage System. Korea Database Journal, Vol. 1, No. 2, pp. 103-125.

Knuth, D.E. (1973) The Art of Computer Programming, Vol. 3. Addison-Wesley.

Larson, P.A. (1988) Dynamic Hash Tables. Comm. of the ACM, Vol. 31, No. 4, pp. 446-457.

Lehman, T. (1986) Design and Performance Evaluation of a Main Memory Relational Database System. Tech. Report #656, CS Dept., Univ. of Wisconsin-Madison.

Litwin, W. (1980) Linear Hashing: A New Tool for File and Table Addressing. Proc. of 6th VLDB Conf., pp. 212-223.

Pearson, P.K. (1990) Fast Hashing of Variable-Length Text Strings. Comm. of the ACM, Vol. 33, No. 6, pp. 677-680.

Acknowledgments

We thank the anonymous reviewers for their valuable comments. In particular, they helped us improve the confidence and the quality of the presentation of this paper.