Efficient Keyword Search for Smallest LCAs in XML Databases

Yu Xu, Department of Computer Science & Engineering, University of California San Diego, yxu@cs.ucsd.edu
Yannis Papakonstantinou, Department of Computer Science & Engineering, University of California San Diego, yannis@cs.ucsd.edu

ABSTRACT

Keyword search is a proven, user-friendly way to query HTML documents in the World Wide Web. We propose keyword search in XML documents, modeled as labeled trees, and describe corresponding efficient algorithms. The proposed keyword search returns the set of smallest trees containing all keywords, where a tree is designated as "smallest" if it contains no tree that also contains all keywords. Our core contribution, the Indexed Lookup Eager algorithm, exploits key properties of smallest trees in order to outperform prior algorithms by orders of magnitude when the query contains keywords with significantly different frequencies. The Scan Eager variant is tuned for the case where the keywords have similar frequencies. We analytically and experimentally evaluate the two variants of the Eager algorithm, along with the Stack algorithm [3]. We also present the XKSearch system, which utilizes the Indexed Lookup Eager, Scan Eager, and Stack algorithms, and a demo of which on DBLP data is available at http://www.db.ucsd.edu/projects/xksearch. Finally, we extend the Indexed Lookup Eager algorithm to answer Lowest Common Ancestor (LCA) queries.

1. INTRODUCTION

Keyword search is a proven, user-friendly way of querying HTML documents in the World Wide Web, and it is well-suited to XML trees as well. It allows users to find the information they are interested in without having to learn a complex query language or needing prior knowledge of the structure of the underlying data [ 5 2 5 6 8]. For example, assume an XML document named School.xml, modeled using the conventional labeled tree model shown in Figure 1, that contains information including classes, projects, etc.
A user interested in finding the relationships between John and Ben issues the keyword search "John Ben", and the search system returns the most specific relevant answers: the subtrees rooted at the nodes that connect the two most closely. The meaning of the answers is obvious to the user: Ben is a TA for John in the CS2A class, Ben is a student in the CS3A class taught by John, and both John and Ben are participants in a project.

(Work supported by NSF ITR 33384. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. SIGMOD, June 14-16, 2005, Baltimore, Maryland, USA. Copyright 2005 ACM 1-59593-060-4/05/06...$5.00.)

According to the Smallest Lowest Common Ancestor (SLCA) semantics, the result of a keyword query is the set of nodes that (i) contain the keywords either in their labels or in the labels of their descendant nodes, and (ii) have no descendant node that also contains all keywords. The answer to the keyword search "John Ben" is the corresponding node list. The subtree rooted at a Class node is a better answer than the subtrees rooted at Classes or School, because it connects John and Ben more closely than the Classes or School elements do. We could use an XML query language such as XQuery to search for "John Ben" and find the most specific elements; one possible query is shown in Figure 2. It is complex and difficult to execute efficiently. We could write a query that is schema-specific and more efficient; however, it would require the existence and knowledge of the schema of School.xml, of the roles John and Ben may play in the document, and of the relationships they may have.
We make the following technical contributions in the paper: We propose two efficient algorithms, Indexed Lookup Eager and Scan Eager, for keyword search in XML documents according to the SLCA semantics. Both algorithms produce part of the answers quickly, so that users do not have to wait long to see the first few answers. The Indexed Lookup Eager algorithm outperforms known algorithms and Scan Eager by orders of magnitude when the keyword search includes at least one low-frequency keyword along with high-frequency keywords. In particular, the performance of the algorithm primarily depends on the number of occurrences of the least frequent keyword and the number of keywords in the query; it does not depend significantly on the frequencies of the more frequent keywords of the query (the precise worst-case complexity for a main-memory version and a disk-based version is provided). The Indexed Lookup Eager algorithm is important in practice, since the frequencies of keywords typically vary significantly. In contrast, Scan Eager is tuned for the case where the occurrences of the query's keywords do not vary significantly. We experimentally evaluate the Indexed Lookup Eager algorithm, the Scan Eager algorithm, and the prior-work Stack algorithm [3]. The algorithms are incorporated into the XKSearch system (XML Keyword Search), which utilizes B-tree indices provided by BerkeleyDB [4]. A demo of the XKSearch system running on DBLP data is available at http://www.db.ucsd.edu/projects/xksearch. The experiments show that the Indexed Lookup Eager algorithm outperforms the Scan Eager and the Stack algorithms

often by orders of magnitude when the keywords in the query have different frequencies, and loses only by a small margin when the keywords have similar frequencies. Indeed, in the DBLP demo only the Indexed Lookup Eager algorithm is used.

Figure 1: School.xml (each node is associated with its Dewey number).

<answer>
for $a in document("School.xml")//*
where empty(for $b in $a/*
            where some $c in $b//* satisfies $c = "John"
              and some $c in $b//* satisfies $c = "Ben"
            return $b)
  and some $d in $a//* satisfies $d = "John"
  and some $d in $a//* satisfies $d = "Ben"
return $a
</answer>

Figure 2: XQuery: find answers for "John Ben".

In Section 2 we introduce the notation and definitions used in the paper. Section 3 presents the SLCA problem and its solutions. We discuss the Indexed Lookup Eager and the Scan Eager algorithms and the sort-merge based Stack algorithm [3]. Section 3 also provides the complexity of the main-memory implementations of the algorithms. In Section 4 we discuss the XKSearch implementation of the Indexed Lookup Eager, Scan Eager, and Stack algorithms and provide their complexity in terms of the number of disk accesses. In Section 5 we discuss the All Lowest Common Ancestor (ALCA) problem and extend the SLCA's Indexed Lookup algorithm to efficiently find LCAs in shallow trees. Our experimental results are discussed in Section 6. We discuss related work in Section 7 and conclude in Section 8.

2. NOTATION

We use the conventional labeled ordered tree model to represent XML trees.
Each node of the tree corresponds to an XML element and is labeled with a tag. We assign to each node a numerical id that is compatible with preorder numbering, in the sense that if one node precedes another in the preorder left-to-right depth-first traversal of the tree, then the first node's id is smaller. The XKSearch implementation uses Dewey numbers as the ids; prior work has shown that Dewey numbers are a good id choice [23]. In addition, Dewey numbers provide a straightforward solution for locating the LCA of two nodes. The usual component-wise ordering is assumed between any two Dewey numbers; obviously this ordering is compatible with the requirement for preorder numbering. Given a list of keywords and an input XML tree, an answer subtree is a subtree that contains at least one instance of each keyword. A smallest answer subtree is an answer subtree such that none of its proper subtrees is also an answer subtree. The result of a keyword search on an input XML tree is the set of the roots of all smallest answer subtrees. For presentation brevity we do not explicitly refer to the input XML tree when it is obvious. The keyword list of a keyword is the list of nodes whose label directly contains the keyword, sorted by id. A node is called an ancestor node in a set if it is an ancestor of some node in the set; when the set is implied by the context, we simply say it is an ancestor node. Notice that if one node is an ancestor of another then its id is smaller, but the other direction is not always true. The function lca computes the lowest common ancestor (LCA) of its argument nodes and returns null if any of the arguments is null. Given two nodes, their LCA is the node whose Dewey number is the longest common prefix of their Dewey numbers, and the cost of computing it is proportional to the maximum depth of the tree. For example, the LCA of any two nodes in Figure 1 can be read directly off their Dewey numbers.
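As a minimal illustrative sketch (not the paper's Java implementation), the LCA of two nodes identified by Dewey numbers is the node whose Dewey number is the longest common prefix of theirs, computable in time proportional to the tree depth d:

```python
def lca(a: str, b: str) -> str:
    """LCA of two nodes given as Dewey numbers ('0.1.2' style):
    the longest common prefix of their component lists.
    Cost is O(d), where d is the maximum depth of the tree."""
    prefix = []
    for x, y in zip(a.split("."), b.split(".")):
        if x != y:
            break
        prefix.append(x)
    return ".".join(prefix)
```

For instance, two nodes with Dewey numbers 0.1.1.1.0 and 0.1.1.2.0 have the LCA 0.1.1.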
Given sets of nodes S1, ..., Sk, a node is an LCA of the sets if it is the LCA of some combination of nodes n1 in S1, ..., nk in Sk. A node is a Smallest Lowest Common Ancestor (SLCA) of S1, ..., Sk if it is an LCA of the sets and none of its proper descendants is also an LCA of the sets.

Notice that the query result is obtained by applying removeAncestor to the set of LCAs, where removeAncestor removes ancestor nodes from its input. The function rm(v, S) computes the right match of v in a set S, that is, the node of S with the smallest id that is greater than or equal to v's id; lm(v, S) computes the left match of v in S, that is, the node of S with the biggest id that is less than or equal to v's id. rm (lm) returns null when there is no right (left) match node. The cost of rm (lm) is O(d log |S|), since it takes O(log |S|) steps (Dewey number comparisons) to find the right (left) match node, and the cost of comparing two Dewey numbers is O(d), where d is the maximum depth of the tree. A companion function takes two nodes, returns the other argument when one argument is null, and returns the descendant node when the two nodes have an ancestor-descendant relationship; its cost is O(d).

3. ALGORITHMS FOR FINDING THE SLCA OF KEYWORD LISTS

This section presents the core Indexed Lookup Eager algorithm, its Scan Eager variation, and the prior-work Stack algorithm [3]. A brute-force solution to the SLCA problem computes the LCAs of all node combinations and then removes ancestor nodes. Besides being inefficient, the brute-force approach is blocking: after it computes an LCA for some combination, it cannot report it as an answer, since there might be another combination whose LCA is a descendant of it. The complexity analysis given in this section is for the main-memory case. We give disk-access complexity in Section 4, after we discuss the implementation details of how we compress and store keyword lists on disk. In the sequel we choose S1 to be the smallest keyword list, as there is a benefit in doing so, as we will see in the complexity analysis of the algorithms.

3.1 The Indexed Lookup Eager Algorithm (IL)

The Indexed Lookup Eager algorithm is based on four properties of SLCAs, which we explain starting from the simplest case, where k = 2 and S1 is a singleton.
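The lm/rm match functions described above can be sketched with a binary search over a keyword list kept sorted by Dewey number, consistent with the O(d log |S|) cost; the helper names are ours, not the paper's:

```python
import bisect

def _key(dewey):
    # Compare Dewey numbers component-wise as integers ('0.10' > '0.9').
    return tuple(int(c) for c in dewey.split("."))

def rm(v, s):
    """Right match: the node of sorted list s with the smallest id >= v,
    or None if no such node exists."""
    keys = [_key(x) for x in s]  # a real implementation would precompute these once per list
    i = bisect.bisect_left(keys, _key(v))
    return s[i] if i < len(s) else None

def lm(v, s):
    """Left match: the node of sorted list s with the biggest id <= v,
    or None if no such node exists."""
    keys = [_key(x) for x in s]
    i = bisect.bisect_right(keys, _key(v))
    return s[i - 1] if i > 0 else None
```

Note that when v itself occurs in s, both lm and rm return v, matching the behavior discussed below for nodes whose label contains multiple keywords.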
According to Property (1), we compute the LCA of the singleton node and its left match in S2, the LCA of the node and its right match in S2, and the singleton formed from the deeper of the two LCAs is the SLCA. Property (1) is based on the following two observations: for any two nodes to the right (according to preorder) of a node v, the closer one yields the deeper (or equally deep) LCA with v; similarly for any two nodes to the left of v.² We generalize to arbitrary k when the first set is a singleton; notice the recursiveness in Property (2). The right or left match of a node in a set S is the node itself if it belongs to S; this may happen when a node's label contains multiple keywords. (² The two observations apply to inorder and postorder as well.) Next we generalize to arbitrary S1. Property (3) straightforwardly leads to an algorithm to compute the SLCAs: first compute the candidate slca({v}, S2, ..., Sk) for each node v in S1; removing ancestor nodes from the resulting set gives the answer. Each candidate is computed using Properties (2) and (1). The benefit of this algorithm over the brute-force approach is that for each node v in S1 the algorithm does not compute the LCAs of all combinations involving v, but computes a single candidate, where each step is carried out by the match functions lm and rm. The complexity of the algorithm is O(kd|S1| log |S|), where |S1| (|S|) is the minimum (maximum) size of the keyword lists S1 through Sk, because for each node in S1 the algorithm needs to find a left and a right match in each one of the other keyword lists, and finding a match in a list costs O(d log |S|). The total cost of the lca and deeper-node operations is dominated by the cost of the match operations; an additional factor is attributed to the cost of the ancestor-removal operation. The subroutine below, based on the following two lemmas, computes the answer efficiently by removing ancestor nodes on the fly.

LEMMA 1. Given two nodes v1 < v2 and a set S, let x1 = slca({v1}, S) and x2 = slca({v2}, S). If x2 <= x1, then x2 is an ancestor node (and hence not an SLCA).

LEMMA 2. Given two nodes v1 < v2 and a set S, with x1 and x2 as above: if x1 is not an ancestor of x2, then for any v3 > v2, x1 is not an ancestor of slca({v3}, S).

Consider the list of candidates produced in the order of the nodes of S1. According to Lemma 1, if a node of the list has an id smaller than or equal to that of an earlier node, it is an ancestor node; thus, when computing the list, we can discard such out-of-order nodes.
The resulting list is in order. However, it is not necessarily free of ancestor nodes. Consider any two adjacent nodes x1, x2 in the list, where x2 is after x1. If x1 is not an ancestor of x2, then x1 cannot be an ancestor of any node that is after x2 in the list (according to Lemma 2), which means x1 is an SLCA. Lemmas 1 and 2 together lead to the subroutine that computes the SLCAs efficiently. Line #5 of the subroutine applies Lemma 1 to remove out-of-order nodes, and lines #6-8 apply Lemma 2 to identify an SLCA as early as possible. As can be seen from the subroutine, at any time only three nodes are needed in memory. Consider the keyword lists for John and Ben. In the first iteration of the loop at line #3, the first candidate is computed (line #4) and retained at the end of the iteration (line #8). In the second and third iterations the retained candidate is replaced by a deeper one. In the fourth iteration a new candidate is computed (line #4). Notice that the condition at

line #6 is true in the fourth iteration, and the retained node is determined to be an SLCA (line #7); the new candidate then replaces it (line #8). In the last iteration the next node is determined to be an SLCA (line #7). Finally, the last node is returned as an SLCA (line #10). This yields the answer to "John Ben".

subroutine (removing ancestors on the fly):
1   ...
2   u := null                          // initially no candidate is held
3   for each node v in S1
4       x := slca({v}, S2, ..., Sk)
5       if (u < x)                     // Lemma 1: discard out-of-order candidates
6           if (u is not an ancestor of x)
7               output u as an SLCA    // Lemma 2
8           u := x
9   ...
10  return u

We can derive an algorithm from the subroutine to compute the answer efficiently, based on Property (4). An algorithm based on Property (4) accesses the keyword lists in just one round, in order. It is a blocking algorithm, since it only processes the last keyword list after it completely processes the preceding keyword lists, and only then starts to produce answers. Obviously, the algorithm based on Property (3) is also a blocking algorithm. The Indexed Lookup Eager algorithm improves on it by adding eagerness: it returns the first part of the answers without having to completely go through any of the keyword lists, and it pipelines the delivery of SLCAs. Assume there is a memory buffer of size P nodes. The Indexed Lookup Eager algorithm first computes the candidates for the first P nodes of S1, then for the next P nodes, and so on, until all nodes of S1 have been processed. All nodes in each batch result except the last node are guaranteed to be SLCAs because of Lemmas 1 and 2, and are returned. The last node of each batch result is carried on to the next batch (lines #6-9) to be determined whether it is an SLCA or not. The smaller P is, the faster the algorithm produces the first SLCA; if P = 1, again only three nodes need to be kept in memory in the whole process. However, a smaller P may delay the computation of all SLCAs when disk accesses are considered. If P = |S1|, the Indexed Lookup Eager algorithm is exactly the same as the algorithm based on Property (4).
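The ancestor-removal step can be sketched as follows (our reconstruction, not the paper's code; it assumes the candidate stream arrives in the order of the nodes of S1, so Lemma 1 justifies discarding out-of-order candidates and Lemma 2 justifies reporting a candidate as soon as its in-order successor is not its descendant):

```python
def remove_ancestors(candidates):
    """Turn the stream of candidate SLCAs (one per node of the smallest
    keyword list, in that order) into the SLCA answer, keeping only a
    constant number of nodes in memory."""
    def key(d):
        return tuple(int(c) for c in d.split("."))

    def is_ancestor_or_self(a, b):
        pa, pb = a.split("."), b.split(".")
        return len(pa) <= len(pb) and pb[:len(pa)] == pa

    slcas, last = [], None
    for v in candidates:
        if last is None:
            last = v
        elif key(v) <= key(last):
            continue           # out of order: v is an ancestor (Lemma 1)
        elif not is_ancestor_or_self(last, v):
            slcas.append(last)  # last cannot be an ancestor of any later node (Lemma 2)
            last = v
        else:
            last = v            # last is an ancestor of v: keep the deeper node
    if last is not None:
        slcas.append(last)
    return slcas
```

For example, a candidate stream 0, 0.1, 0.1.1, 0.1.2, 0.2.0.0 yields the SLCAs 0.1.1, 0.1.2, and 0.2.0.0.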
The complexity analysis of the Indexed Lookup Eager algorithm is the same as that of the algorithm based on Property (3), except that there is no separate operation to remove ancestor nodes from a set. Thus the complexity of the Indexed Lookup Eager algorithm is O(kd|S1| log |S|), where |S1| (|S|) is the minimum (maximum) size of the keyword lists. Consider the query "John Ben Class" applied on the data of Figure 1, with the keyword lists for John, Ben, and Class. In the first iteration of the loop at line #2, the first P nodes of S1 are read (line #3), and their candidates are computed at lines #4-5. Initially the carried node is null, so lines #6-9 have no effect. As mentioned before, every node in the batch result except the last one is an SLCA and is returned (lines #10, #11). In the first iteration, line #11 outputs nothing, and the last candidate is carried to the next iteration to be determined whether it is an SLCA or not. In the second iteration of the loop at line #2, the remaining nodes of S1 are read (line #3). After executing lines #4 and #5, the candidates include the LCAs computed from the matches 0.1.2.0.0 and 0.2.0.0.0. The condition at line #8 is true; thus the node carried from the first iteration is determined to be an SLCA and returned (line #9). Line #11 outputs nothing in this iteration again. Since there are no more nodes in S1, line #13 is executed. Thus the Indexed Lookup algorithm returns the answer for "John Ben Class".

ALGORITHM 1 (INDEXED LOOKUP EAGER ALGORITHM). Assume we have a memory buffer of size P nodes.
1   u := null
2   while (there are more nodes in S1)
3       read the next P nodes of S1 into buffer B
4       for each node v in B
5           replace v by slca({v}, S2, ..., Sk), removing ancestors on the fly
6       if (B's first node <= u)
7           removeFirstNode(B)
8       if (u is not null and u is not an ancestor of B's first node)
9           output u
10      u := the last node of B
11      output all nodes of B except the last one
12  ...
13  output u

3.2 Scan Eager Algorithm

When the occurrences of keywords do not differ significantly, the total cost of finding matches by lookups may exceed the total cost of finding matches by scanning the keyword lists. We implement a variant of the Indexed Lookup Eager algorithm, named the Scan Eager algorithm, to take advantage of the fact that the accesses to any keyword list are strictly in increasing order in the Indexed Lookup Eager algorithm.
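Because the probes into each keyword list arrive with monotonically increasing ids, the binary-search matches can be replaced by a forward-only cursor per list. A rough sketch of this cursor-based matching (our naming, not the paper's code):

```python
class ScanCursor:
    """Forward-only cursor over one keyword list (sorted by Dewey id).
    Since successive probes arrive in increasing id order, the total
    scanning work over a whole query is linear in the list length."""

    def __init__(self, nodes):
        self.nodes = nodes
        self.i = 0

    @staticmethod
    def _key(d):
        return tuple(int(c) for c in d.split("."))

    def matches(self, v):
        """Return (left match, right match) of v, advancing the cursor."""
        k = self._key(v)
        while self.i < len(self.nodes) and self._key(self.nodes[self.i]) < k:
            self.i += 1
        right = self.nodes[self.i] if self.i < len(self.nodes) else None
        if right is not None and self._key(right) == k:
            left = right                    # v itself occurs in the list
        else:
            left = self.nodes[self.i - 1] if self.i > 0 else None
        return left, right
```

The cursor stops at the first node whose id is greater than or equal to the probe, so the node just before it is the left match; this is only valid because successive probes never move backwards.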
The Scan Eager algorithm is exactly the same as the Indexed Lookup Eager algorithm, except that its lm and rm implementations scan the keyword lists to find matches, maintaining a cursor for each keyword list. In order to find the left and right match of a given node in a list, the Scan Eager algorithm advances the cursor of the list until it finds the node that is closest to the given node from the left or the right side. Notice that nodes from different lists may not be accessed in order, though nodes from the same list are accessed in order. The complexity of the Scan Eager algorithm is O(kd|S1| + d(|S1| + ... + |Sk|)), because there are O(|S1| + ... + |Sk|) Dewey number comparisons and O(k|S1|) lca operations.

3.3 The Stack Algorithm

The stack-based sort-merge algorithm (DIL) in XRANK [3], which also uses Dewey numbers, is modified to find SLCAs and is called the Stack algorithm here. Each stack entry has a pair of components. Assume the components from the bottom entry up to a given stack entry are p1, ..., pn; then that stack entry denotes the node with the Dewey number p1.....pn. Each entry also keeps a boolean array of length k, where a true value at position i means that the subtree rooted at the node denoted by the stack entry directly or indirectly

contains the i-th keyword. (Figure 3 shows the successive states of the stack, where J stands for John, B stands for Ben, and C stands for Class; each sub-figure shows the state after processing one node, and the last one reports an SLCA.) For example, the top entry of the stack in Figure 3(b) denotes one node and the middle entry denotes another. The Stack algorithm merges all keyword lists and computes the longest common prefix of the node with the smallest Dewey number from the input lists and the node denoted by the top entry of the stack. Then it pops all top entries containing Dewey components that are not part of the common prefix. If a popped entry contains all keywords, then the node it denotes is an SLCA. Otherwise, the information about which keywords it contains is used to update its parent entry's keyword array. Also, a stack entry is created for each Dewey component of the smallest node that is not part of the common prefix, effectively pushing the smallest node onto the stack. The above action is repeated for every node from the sort-merged input lists. Consider again the query "John Ben Class" applied on the data of Figure 1. The keyword lists for John, Ben, and Class are [0.0.0, 0.1.0.0.0, 0.1.1.1.0, 0.1.2.0.0, 0.2.0.0.0], [0.1.1.2.0, 0.1.2.1.0, 0.2.0.0.1, 0.3.0.0.0, 0.3.1.0.0], and [0.1.0, 0.1.1, 0.1.2, 0.1.3, 0.1.4], respectively. Initially the smallest node is 0.0.0, and Figure 3(a) shows the initial state of the stack, where the T in the top entry denotes that the node represented by the top entry contains the first keyword, John. The next smallest node is a Class node. Since the longest common prefix of the two nodes is computed at line #4, the top two entries are popped (line #6). The popped node contains John, and this information is passed to the current top entry (lines #12-13).
Then two new entries, from the two components of the node that are not among the longest common prefix, are pushed onto the stack (line #16). Notice that the T in the bottom entry denotes that the node (School) contains the keyword John. Each sub-figure in Figure 3 shows the state of the stack after processing the node shown in its caption. For example, when the algorithm processes the last node, the stack before processing is shown in Figure 3(f) and the stack after processing in Figure 3(g). The longest common prefix of the node and the stack is computed (line #4); thus the top three entries are popped (line #6). When popping the third entry, the algorithm reports the corresponding node as an SLCA, since its keyword array contains all true values (line #7). Notice that the true flag for Ben from the top entry in Figure 3(f) is used in the decision that the third entry is an SLCA. The complexity of Stack is O(d(|S1| + ... + |Sk|)), since both the number of lca operations and the number of Dewey number comparisons are proportional to the total size of the keyword lists. The Scan Eager algorithm has several advantages over the Stack algorithm. First, the Scan Eager algorithm starts from the smallest keyword list, does not have to scan to the end of every keyword list, and may terminate much earlier than the Stack algorithm, as we will see in an example soon. Second, the number of lca operations of the Scan Eager algorithm, O(k|S1|), is usually much smaller than that of the Stack algorithm. Third, the Stack algorithm operates on a stack whose depth is bounded by the depth of the input tree, while the Scan Eager algorithm with P = 1 only needs to keep three nodes in memory in the whole process, and no push/pop operations are involved.

ALGORITHM 2 (STACK ALGORITHM).
1   stack := empty
2   while (the end of all keyword lists has not been reached)
3       v := getSmallestNode()
4       p := the largest p such that the first p components of v match the stack
5       while (stack.size > p)
6           stackEntry := stack.pop()
7           if (isSLCA(stackEntry))        // no other stack entry can represent an SLCA
8               output stackEntry as an SLCA
9               set all entries of the Keywords array of every stack entry to false
10          else                           // pass keyword witness information to the top entry
11              ...
12              for each j
13                  if (stackEntry.keywords[j] = true) stack.top.keywords[j] := true
14      ...
15      // add the non-matching components of v to the stack
16      for each non-matching component v[j]: stack.push(v[j]); then stack.top.keywords[i] := true
17  ...
18  ...
19  check the entries of the stack and return any SLCA if one exists

isSLCA(stackEntry): if (stackEntry.keywords[i] = true for all i) return true, else return false

getSmallestNode: returns the node with the smallest Dewey number from all keyword lists and advances the cursor of the list it comes from. Assume v is represented as the array of its Dewey number components; for example, v is [0, 1, 3] if its Dewey number is 0.1.3.

We consider again the query "John Ben Class" applied on the data of Figure 1, and assume the tree does not have one of the Ben nodes, to show the first advantage of the Scan Eager algorithm over the Stack algorithm.³ For a node of the smallest list, the Scan Eager algorithm finds its match in the John list and computes their LCA; then it finds the match in the Ben list and computes their LCA. Since one of these LCAs is an ancestor of the other, the ancestor is discarded and no further access to that list is needed. The last node accessed by the Scan Eager algorithm occurs well before the end of the lists. The Stack algorithm, in contrast, has to visit all nodes, because it cannot tell whether the last node

(³ For the sake of this example we neglect that the Dewey numbers would change.)
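A compact executable sketch of the stack-based merge, simplified from Algorithm 2 (the keyword lists in the example are reconstructed from Figure 1 and should be treated as an assumption; the code is ours, not the paper's):

```python
def stack_slca(lists):
    """Simplified sketch of the Stack algorithm: merge all keyword lists
    in Dewey order; the stack holds the components of the current
    root-to-node path, each entry carrying the set of keywords witnessed
    in its subtree so far.  Popping an entry that witnessed all k
    keywords reports an SLCA (and clears its ancestors' witnesses,
    since an ancestor of an SLCA cannot be an SLCA)."""
    k = len(lists)
    stream = sorted((tuple(int(c) for c in d.split(".")), i)
                    for i, lst in enumerate(lists) for d in lst)
    stack, results = [], []      # stack entries: [component, witness set]

    def pop():
        comp, wit = stack.pop()
        if len(wit) == k:
            path = [str(e[0]) for e in stack] + [str(comp)]
            results.append(".".join(path))
            for e in stack:      # ancestors cannot be SLCAs any more
                e[1].clear()
        elif stack:
            stack[-1][1] |= wit  # pass witness info to the parent entry

    for d, i in stream:
        p = 0                    # longest common prefix with the stack path
        while p < len(stack) and p < len(d) and stack[p][0] == d[p]:
            p += 1
        while len(stack) > p:    # pop non-matching components
            pop()
        for comp in d[len(stack):]:
            stack.append([comp, set()])
        stack[-1][1].add(i)      # mark keyword i at the current node
    while stack:
        pop()
    return results
```

On the reconstructed John and Ben lists of Figure 1 this reports the SLCAs 0.1.1, 0.1.2, and 0.2.0.0, in document order.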

may lead to an SLCA or not until it comes to process the last node, and it has to repeatedly compute the longest common prefix of each node with the node represented by the top entry of the stack. Notice that the Class list may have arbitrarily many nodes between them⁴ that Stack has to access but Scan Eager does not need to.

Figure 4: B+ tree built from the data of Figure 1 for the Scan Eager and Stack algorithms (contents not reproduced here).

4. XKSEARCH SYSTEM IMPLEMENTATION

In this section we present the architecture of the XKSearch implementation, then discuss how the keyword lists are compressed and stored in disk-based B-tree index structures, and finally provide a disk-access complexity analysis, summarized in Table 1, for the three algorithms discussed in Section 3: Indexed Lookup Eager, Scan Eager, and Stack. We implemented the Indexed Lookup Eager, Scan Eager, and Stack algorithms in Java using the Apache Xerces XML parser and Berkeley DB [4]. The architecture of the implementation (XKSearch) is shown in Figure 6. The LevelTableBuilder reads an input XML document and outputs a level table LT. The inverted index builder reads in the level table LT and outputs a keyword list for each keyword. Those keyword lists are stored in a B-tree structure that allows efficient implementation of the match operations. The index builder also generates a frequency table, which records the frequencies of keywords, is read into memory by the initializer, and is stored as a hash table.
The query engine accepts a keyword search, uses the frequency hash table to locate the smallest keyword list, executes the Indexed Lookup Eager, Scan Eager or Stack algorithm, and returns all SLCAs. For performance reasons, Dewey numbers are compressed. We introduce a level table LT with d entries, where d is the depth of the input tree. The entry LT(i) denotes the maximum number of bits needed to store the i-th component of a Dewey number, i.e., LT(i) = ceil(log2(c_i)), where c_i is the number of children of the node at level i-1 that has the maximum number of children among all nodes at that level. The root is at level 0.5 In general, ceil((LT(1) + ... + LT(l)) / 8) bytes are needed to store the Dewey number of a node at level l. The level table LT for Figure 1 is

   i      1  2  3  ...
   LT(i)  2  3  2  ...

There are two types of B+ tree structures implemented in XKSearch; the first is for the Indexed Lookup Eager algorithm, the second is for the Scan Eager and Stack algorithms. In the implementation of the Indexed Lookup Eager algorithm, we put all keyword lists in a single B+ tree, where keywords are the primary key and Dewey numbers are the secondary key (see Figure 5, where each block can store up to four entries). No data values are associated with the keys, since the keys contain both keywords and Dewey numbers. Given a keyword w and a Dewey number n, it takes a single range scan operation [11] to find the right and left match of n in the keyword list of w. Since B+ tree implementations usually buffer the top levels of the B+ tree in memory, we assume the number of disk accesses for finding a match in a keyword list does not include the accesses to the non-leaf nodes of the B+ tree. The number of disk accesses of the Indexed Lookup Eager algorithm is proportional to k|S1|, because for each node in the smallest list S1 the IL algorithm needs to find a left and a right match in each one of the other keyword lists.

4 Of course the Dewey number of the node would be changed accordingly.
5 We could have included a level-table entry for the root; however, the root is conveniently represented by 0.
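The left/right-match machinery just described can be mimicked in memory with binary search over a sorted list of Dewey numbers. The following hedged sketch of the two-keyword case (our own simplification, not the paper's code) finds, for each node of the smaller list, the deepest LCA obtainable from the other list via its left and right matches, and then discards candidate answers that have a smaller candidate beneath them:

```python
from bisect import bisect_left

def lca(a, b):
    """Longest common prefix of two Dewey numbers (component lists)."""
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return a[:n]

def closest_match(S, v):
    """Of the left and right matches of v in the sorted list S, return
    the one sharing the longer common prefix (i.e., the deeper LCA)."""
    i = bisect_left(S, v)
    neighbours = S[max(i - 1, 0):i + 1]   # left and/or right match of v
    return max(neighbours, key=lambda u: len(lca(u, v)))

def slca_two_lists(S1, S2):
    """SLCAs of two sorted keyword lists: collect the deepest LCA for
    each node of S1, then drop every candidate that is an ancestor of a
    later candidate (an ancestor cannot be smallest)."""
    cands = sorted({tuple(lca(v, closest_match(S2, v))) for v in S1})
    out = []
    for c in map(list, cands):
        while out and c[:len(out[-1])] == out[-1]:
            out.pop()
        out.append(c)
    return out
```

Python lists compare lexicographically, which coincides with Dewey document order, so `bisect` delivers exactly the range-scan behaviour described above.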
Notice that the number of disk accesses of IL cannot be more than the total number of blocks of the keyword lists, because the IL algorithm accesses all keyword lists strictly in order. In the implementation of the Scan Eager algorithm and the Stack algorithm, the keys in the B+ tree are simply keywords. The data associated with each key is the list of Dewey numbers of the nodes directly containing the keyword (see Figure 4). All keyword lists are clustered. The number of disk accesses of Scan Eager or Stack is the sum over all keyword lists S_i of |S_i|/B_i, where B_i is the average number of nodes in a disk block of S_i and |S_i| is the number of nodes in the keyword list S_i. B_i depends on the page size, the depth of the XML tree and the maximum out-degrees of nodes at each level. The full-size keyword lists are not needed to compute SLCAs, according to the following property; the reduced list is called the core-keyword list of the keyword. To turn a keyword list into a core-keyword list, the brute-force algorithm compares each node to every other node. Property: given any two nodes u and v such that u precedes v in Dewey order, if u is not an ancestor of v, then u cannot be an ancestor of any node that follows v. Based on this fact, the IndexBuilder (Figure 6) uses an algorithm that produces all core-keyword lists in one parsing pass over the input XML document. The description of the algorithm is omitted to save space.

5. THE ALL LOWEST COMMON ANCESTOR PROBLEM (ALCA)

We can use the Indexed Lookup Eager algorithm to derive an efficient algorithm that finds all LCAs, that is, the LCAs of every combination of nodes from S1 through Sk. Because an LCA is either an ancestor of a SLCA or a SLCA itself, we can find all LCAs by walking up the tree starting from the SLCAs. We solve the ALCA problem by first finding the list L of all SLCAs and then, for each

[Figure 5: B+ tree built from the data of Figure 1 for the Indexed Lookup Eager algorithm.]

[Figure 6: XKSearch architecture: the LevelTableBuilder reads the input XML document T and produces the level table LT; the IndexBuilder produces the B-tree-resident keyword lists and the frequency table; the Initializer loads the frequency hash table used by the Query Engine to answer keyword searches.]

[Figure 7: Finding all LCAs.]

ancestor v of each node in L, we check whether v is an LCA, as explained next. Let u be a SLCA and consider any node v that is an ancestor of u (see Figure 7). Let c be the child of v on the path from v to u. If the subtree rooted at v contains a node x carrying some keyword, say w, such that x is not under c and x is not an ancestor of u, then v is an LCA. To determine whether v contains such a node, we use at most two lookups. The nodes under v but not under c are divided into two parts by the path from v to u. Let node a be the right match of v in S_w (the keyword list of w). If a is under v but not under c and a is not an ancestor of u, then a is in the left part, which means v is an LCA. Next, if the Dewey id of v is d_v, then the Dewey id of c is d_v.n, where n is the ordinal number of c among its siblings. The node with Dewey id d_v.(n+1), i.e., the immediate right sibling of c, is called the uncle node of u under v. Let b be the right match of the uncle node in S_w. If b is under v, then b is in the right part, which also makes v an LCA.
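The two probes described above — a right match of v for the left part and a right match of c's uncle for the right part — can be sketched in memory with binary search. This is a simplified, hedged version (our own names; the paper performs the probes as B+ tree range scans and additionally excludes ancestors of the SLCA):

```python
from bisect import bisect_left

def is_prefix(a, b):
    """True iff Dewey number a is a prefix of (ancestor-or-self of) b."""
    return len(a) <= len(b) and b[:len(a)] == a

def has_witness_outside_child(S, v, c):
    """Does the sorted Dewey list S contain a node in v's subtree that
    is not in the subtree of v's child c?  Probe 1 takes the right
    match of v: the first keyword node at or after v; if it lies under
    v but outside c's subtree, v has a witness.  Probe 2 takes the
    right match of c's uncle node and checks whether it is under v."""
    i = bisect_left(S, v)                  # right match of v
    if i < len(S) and is_prefix(v, S[i]) and not is_prefix(c, S[i]):
        return True
    uncle = c[:-1] + [c[-1] + 1]           # immediate right sibling of c
    j = bisect_left(S, uncle)              # right match of the uncle
    return j < len(S) and is_prefix(v, S[j])
```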
The existence of a node containing one of the other keywords under v but not under c can be checked similarly. The subroutine checkLCA in Algorithm 3 is based on the above observations. Since a node might be an ancestor of multiple SLCAs, we want to avoid repeatedly checking whether it is an LCA. Instead of maintaining data structures that record whether a node has already been identified as an LCA, we use an approach that only needs to keep three nodes in memory. Let u1, u2 and u3 be the first three nodes (see Figure 7) in the list L of SLCAs produced by the Eager algorithm. For each node v on the path from u1 up to, but not including, lca(u1, u2), we check whether v is an LCA or not. Then we check each node on the path from u2 up to, but not including, lca(u2, u3),6 and so on. Algorithm 3 is based on this approach and guarantees that each ancestor node of every SLCA is checked exactly once. Notice that Algorithm 3 does not need to produce all SLCAs first: it pipelines the delivery of LCAs, since Algorithm IL pipelines the delivery of SLCAs. The disk access cost of Algorithm 3 is that of finding all SLCAs plus the cost of checking their ancestors; the number of ancestor nodes checked is at most the number of SLCAs times the maximum depth of the tree, and checking each node costs a constant number of right-match lookups per keyword list.

ALGORITHM 3 findLCA(list L) (COMPUTING ALL LCAS)
// L is the list of SLCAs
u = removeHead(L);
while L has more nodes
   u' = removeHead(L);
   current-lca = lca(u, u');
   for each ancestor node v of u up to current-lca   // not including current-lca
      if (checkLCA(v, u) == true) output v;
   u = u';

boolean checkLCA(v, u)
   c = the child of v on the path from v to u
   for i = 1 to k
      a = rm(v, S_i)   // right match of v in keyword list S_i
      if (a is under v, not under c, and not an ancestor of u) return true;
   for i = 1 to k
      s = the uncle node of u under v;
      b = rm(s, S_i);
      if (b is under v) return true;
   return false;

6. EXPERIMENTS

We have run XKSearch on the DBLP data.7 We filter out citations and other information related only to the DBLP website, and group the entries first by journal/conference name and then by year. The experiments have been done on a 1.2GHz laptop with 512MB of RAM.
An online demo, which enables keyword search over the same grouped 183MB DBLP data used in the experiments, is provided at http://www.db.ucsd.edu/projects/xksearch. The demo runs as a Java Servlet using the Apache Jakarta Tomcat server. The Xalan engine is used to translate XML results to HTML. We evaluate the Scan Eager, Indexed Lookup Eager and Stack algorithms discussed in Section 4 for the SLCA semantics, varying the number and the frequencies of keywords, both on hot cache (Figures 8, 9 and 10) and on cold cache (Figures 11, 12 and 13). A program randomly chose forty queries for each experiment. The response time of each experiment on hot cache in Figures 8, 9

6 lca(u2, u3) could be a descendant of lca(u1, u2).
7 http://www.informatik.uni-trier.de/~ley/db

[Table 1: Complexity analysis for the Indexed Lookup Eager (IL), Scan Eager and Stack algorithms, giving for each algorithm the number of disk accesses, the number of main-memory operations (Dewey number comparisons) and the main-memory complexity, where |S1| (|Sk|) is the minimum (maximum) size of keyword lists S1 through Sk, B is the total number of blocks of all keyword lists on disk, and d is the maximum depth of the tree.]

and 10 is the average of the corresponding forty queries after five executions. The response time of each experiment on cold cache in Figures 11, 12 and 13 is the average of the corresponding forty queries, each of which was run just once after a machine reboot. In Figure 8, each query contains two keywords. The smaller frequency is shown in the caption, while the bigger frequency is varied. For example, each query of Figure 8(c) in the "size of the large list = 1000" category contains two keywords, where one of them has frequency 100 while the other has frequency 1000. As can be seen from Figure 8, the performance of the Scan Eager and Stack algorithms degrades linearly as the size of the large keyword list increases, while the running time of the IL algorithm is essentially constant and its performance is often several orders of magnitude better than Scan Eager and Stack. In all experiments Scan Eager performs a little better than Stack, for the reasons explained in Section 3.3. In Figure 9, each query contains one keyword of small frequency (shown in the caption of each subfigure) while all other keywords have a fixed large frequency. For example, each query of Figure 9(b) in the five-keyword category contains five keywords, where one of them has the small frequency and the other four have the large frequency. The performance of the Scan Eager and the Stack algorithms is essentially independent of the small frequency. Keywords in Figure 10 have the frequencies shown in the caption.
The Scan Eager and the Stack algorithms perform a little better than the Indexed Lookup Eager algorithm in most experiments, since Indexed Lookup Eager performs best when the frequencies of the keyword lists vary greatly, while all keyword lists in Figure 10 have the same size and the cost of the index lookups is more likely to exceed the cost of a single scan. We repeated the experiments of Figures 8, 9 and 10 with cold cache; the results are reported in Figures 11, 12 and 13 respectively. We see similar relationships among the Scan Eager, Indexed Lookup Eager and Stack algorithms. However, the differences in performance among the algorithms are not as significant as in the hot cache experiments. The reason is that most keyword lists do not occupy many pages; hence making a random access on a list is effectively equivalent to fetching the complete list. Notice that disk access time dominates any main-memory cost, as can be seen from the significant response time increase from the hot cache experiments to the cold cache experiments. We also implemented XKSearchB, which stores Dewey numbers without using a level table, as discussed in Section 4. Experiments show that the size of the keyword lists and the time to construct them are proportional to the size of the input XML documents. On average, the indexes constructed by XKSearch are smaller than those of XKSearchB and are constructed faster; the query response time of XKSearch on hot cache is 70% of that of XKSearchB for the queries in Figures 8, 9 and 10.

7. RELATED WORK

LCA computation and proximity search are the two areas most closely related to this work. Computing the LCA of two nodes has been studied extensively, and constant-time algorithms are known for main-memory data structures [20, 26]. These algorithms were designed without the concern of minimizing disk accesses. For example, to compute the LCA of two nodes in [20], two lookups may be needed even after we adapt the data structures for disk access minimization.
Computing the LCA of nodes using their Dewey numbers does not need any disk access, which is the reason we use Dewey numbers. Works on integrating keyword search into XML query languages [9, 10, 21, 24, 25] augment a structured query language with keyword search primitives. [21] proposes a meet operator that operates on multiple sets where all nodes in the same set are required to have the same schema, which is a special case of the all-LCAs problem. The meet of two nodes u and v is implemented efficiently using joins on relations, where the number of joins is the number of edges on the paths from u and v to their LCA. This technique is well suited to relational database implementations. The XXL search engine [25] extends an SQL-like syntax with ranking and ontological knowledge for similarity metrics. In BANKS [5] and Proximity Search [12], a database is viewed as a graph of objects with edges representing relationships between the objects. Proximity Search [12] enables proximity searches by a pair of queries; one example is the query "Movie Travolta Cage". BANKS uses heuristics to approximate the Steiner tree problem. Discover [15], DBXplorer [1] and XKeyword [16] perform keyword search on relational databases modeled as graphs. XKSearch delivers much higher efficiency than the above systems, which perform keyword search on arbitrary graphs, by being tuned for SLCA keyword search on trees. XRANK [13] extends Web-like keyword search to XML. Results are ranked by a PageRank [6] hyperlink metric extended to XML. The ranking techniques are orthogonal to the retrieval and hence can easily be incorporated in our work. The keyword search algorithm in XRANK that is relevant to our problem is adapted as the Stack algorithm in this paper, which we have described and compared with the Indexed Lookup Eager and Scan Eager algorithms. Notice that XRANK's query result semantics is closely related to the SLCA semantics used in this paper.
A recent work, XSEarch [8], supports extended keyword search in XML documents and focuses on the semantics and the ranking of the results. It extends information-retrieval techniques (tf-idf) to rank the results. [14] provides an optimized version of the LCA-finding stack algorithm. Most important, the algorithm of [14] returns the set of LCAs along with efficiently (for performance and presentation purposes) summarized explanations of why each node is an LCA. [18] proposes a simple, novel search technique called Schema-Free XQuery to enable users to query an XML document using XQuery without requiring full knowledge of the document schema. In particular, [18] introduces a function named mlcas to XQuery

[Figure 8: k = 2, keeping the small frequency constant ((b) 10, (c) 100, (d) 1000; (a) shows the legend) and varying the frequency of the large keyword list (hot cache).]

[Figure 9: Varying the number of keywords, keeping frequencies constant (hot cache).]

[Figure 10: Varying the number of keywords, all keyword lists having the same size (hot cache): (a) small frequency (10), (b) medium frequency (100), (c) high frequency (1000), (d) large frequency (10000).]

[Figure 11: k = 2, keeping the small frequency constant ((a) 10, (b) 100, (c) 1000) and varying the frequency of the large keyword list (cold cache).]

[Figure 12: Varying the number of keywords, keeping frequencies constant (cold cache).]

[Figure 13: Varying the number of keywords, all keyword lists having the same size (cold cache): (a) small frequency (10), (b) medium frequency (100), (c) high frequency (1000), (d) large frequency (10000).]

based on the concept of MLCA (Meaningful Lowest Common Ancestor), where the set of MLCAs of sets S1, ..., Sk coincides with the set of SLCAs. The complexity of the stack-based algorithm proposed in [18] to compute the set of MLCAs of S1, ..., Sk, which is similar to the sort-merge stack algorithm in XRANK, is the same as that of the Scan Eager and Stack algorithms. Similar to [14], [18] returns the set of MLCAs with explanations of why each node is an MLCA. A popular numbering scheme is to use a pair of numbers consisting of the preorder and postorder numbers of a node [2, 17]. Given two nodes and their pairs of numbers, it can be determined in constant time whether one of them is an ancestor of the other. However, this type of scheme is inefficient for finding the LCA of two nodes. Tree query patterns for querying XML trees have received great attention, and efficient approaches are known [3, 7, 22]. These approaches are not applicable to the keyword search problem because, given a list of keywords, the number of tree patterns derivable from the keywords is exponential in the size of the schema of the input document and in the number of keywords. Finally, there are research prototypes and commercial products that allow keyword searches on a collection of XML documents and return a list of (ranked) XML documents that contain the keywords [19, 27].

8. CONCLUSIONS

The XKSearch system inputs a list of keywords and returns the set of Smallest Lowest Common Ancestor (SLCA) nodes, i.e., the list of nodes that are roots of trees that contain the keywords and contain no node that is also the root of a tree that contains the keywords. For each keyword, the system maintains the list of nodes that contain the keyword, in the form of a tree sorted by the ids of the nodes. The key property of SLCA search is that, given two keywords w1 and w2 and a node v that contains keyword w1, one need not inspect the whole node list of keyword w2 in order to discover potential solutions.
Instead, one only needs to find the left and right match of v in the list of w2, where the left (right) match is the node with the greatest (least) id that is smaller (greater) than or equal to the id of v. The property generalizes to more than two keywords and leads to the Indexed Lookup Eager algorithm, whose main-memory complexity is O(kd|S1|), where d is the maximum depth of the tree, k is the number of keywords in the query, and |S1| (|Sk|) is the minimum (maximum) size of the keyword lists S1 through Sk. Assuming a B-tree disk-based structure whose non-leaf nodes are cached in main memory, the number of disk accesses needed is O(k|S1|). The analytical results, as well as the experimental evaluation, show that the Indexed Lookup Eager algorithm outperforms the other algorithms, often by orders of magnitude, when the keywords have different frequencies. We provide the Scan Eager algorithm as the best variant for the case where the keywords have similar frequencies. The experimental evaluation compares the Indexed Lookup Eager, Scan Eager and Stack (described in [13]) algorithms. The XKSearch system is implemented using the Berkeley DB [4] B-tree indices, and a demo of it on DBLP data is available at http://www.db.ucsd.edu/projects/xksearch.

9. REFERENCES
[1] S. Agrawal, S. Chaudhuri, and G. Das. DBXplorer: A system for keyword-based search over relational databases. In ICDE, 2002.
[2] V. Aguilera et al. Querying XML documents in Xyleme. In SIGIR Workshop on XML and Information Retrieval, 2000.
[3] S. Amer-Yahia, S. Cho, and D. Srivastava. Tree pattern relaxation. In EDBT, 2002.
[4] Berkeley DB. http://www.sleepycat.com/.
[5] G. Bhalotia, A. Hulgeri, C. Nakhe, S. Chakrabarti, and S. Sudarshan. Keyword searching and browsing in databases using BANKS. In ICDE, 2002.
[6] S. Brin and L. Page. The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems, 30(1-7):107-117, 1998.
[7] Z. Chen, H. Jagadish, F. Korn, and N. Koudas. Counting twig matches in a tree. In ICDE, 2001.
[8] S. Cohen, J. Mamou, Y. Kanza, and Y.
Sagiv. XSEarch: A semantic search engine for XML. In VLDB, 2003.
[9] D. Florescu, D. Kossmann, and I. Manolescu. Integrating keyword search into XML query processing. In WWW9, 2000.
[10] N. Fuhr and K. Großjohann. XIRQL: A query language for information retrieval in XML documents. In SIGIR, 2001.
[11] H. Garcia-Molina, J. Ullman, and J. Widom. Database System Implementation. Prentice-Hall, 2000.
[12] R. Goldman, N. Shivakumar, S. Venkatasubramanian, and H. Garcia-Molina. Proximity search in databases. In VLDB, 1998.
[13] L. Guo, F. Shao, C. Botev, and J. Shanmugasundaram. XRANK: Ranked keyword search over XML documents. In SIGMOD, 2003.
[14] V. Hristidis, N. Koudas, Y. Papakonstantinou, and D. Srivastava. Keyword proximity search in XML trees. Available at http://www.db.ucsd.edu/publications/treeproximity.pdf.
[15] V. Hristidis and Y. Papakonstantinou. Discover: Keyword search in relational databases. In VLDB, 2002.
[16] V. Hristidis, Y. Papakonstantinou, and A. Balmin. Keyword proximity search on XML graphs. In ICDE, 2003.
[17] Q. Li and B. Moon. Indexing and querying XML data for regular path expressions. In VLDB, 2001.
[18] Y. Li, C. Yu, and H. V. Jagadish. Schema-free XQuery. In VLDB, 2004.
[19] J. Naughton et al. The Niagara Internet query system. IEEE Data Engineering Bulletin, 24(2):27-33, 2001.
[20] B. Schieber and U. Vishkin. On finding lowest common ancestors: Simplification and parallelization. SIAM J. Computing, 17(6):1253-1262, 1988.
[21] A. Schmidt, M. L. Kersten, and M. Windhouwer. Querying XML documents made easy: Nearest concept queries. In ICDE, 2001.
[22] D. Srivastava et al. Structural joins: A primitive for efficient XML query pattern matching. In ICDE, 2002.
[23] I. Tatarinov, S. Viglas, K. Beyer, J. Shanmugasundaram, E. Shekita, and C. Zhang. Storing and querying ordered XML using a relational database system. In SIGMOD, 2002.
[24] A. Theobald and G. Weikum. Adding relevance to XML. In WebDB, 2000.
[25] A. Theobald and G. Weikum.
The index-based XXL search engine for querying XML data with relevance ranking. In EDBT, 2002.
[26] Z. Wen. New algorithms for the LCA problem and the binary tree reconstruction problem. Information Processing Letters, 51(1):11-16, 1994.
[27] XYZFind. http://www.searchtools.com/tools/xyzfind.html.