Overview of the Integration Wizard Project for Querying and Managing Semistructured Data in Heterogeneous Sources


In Proceedings of the Fifth National Computer Science and Engineering Conference (NSEC 2001), Chiang Mai University, Chiang Mai, Thailand, November 2001.

Overview of the Integration Wizard Project for Querying and Managing Semistructured Data in Heterogeneous Sources

Joachim Hammer
Computer & Information Science & Eng., University of Florida
Box 116120, Gainesville, FL 3261 USA
+1 352 392 2687
Email: jhammer@cise.ufl.edu

Charnyote Pluempitiwiriyawej
Computer Science Department, Mahidol University
Rama VI Rd., Bangkok 10400, THAILAND
+66 2 247 0333
Email: cccpt@mahidol.ac.th

Abstract

We describe the Integration Wizard (IWIZ) system for retrieving heterogeneous information from multiple data sources. IWIZ provides users with an integrated, queriable view of the information that is available in the sources, without a need to know where the information is coming from or how it is accessed. Due to the popularity of the Web, we focus on sources containing semistructured data. IWIZ uses novel mediation and wrapper technologies to process multi-source queries, transform data from the native source context into the internal IWIZ model, which is based on XML, and merge the results. To improve query response time, a data warehouse is used to cache results of frequently asked queries. This paper provides an overview of the IWIZ architecture and reports on some of our experience with the prototype implementation.

1. Introduction

The need to access and manage information from a variety of sources and applications using different data models, representations, and interfaces has created a great demand for tools supporting data and systems integration. It has also provided continuous motivation for research projects, as seen in the literature [1, 2, 5, 12]. One reason for this need was the paradigm shift from centralized to client-server and distributed systems, with multiple autonomous sources producing and managing their own data.
A more recent cause for the interest in integration technologies is the emergence of E-Commerce and its need for accessing repositories, applications, and legacy systems [4] located across the corporate intranet or at partner companies on the Internet. (For simplicity, we will collectively refer to repositories, applications, legacy systems, etc. as sources.)

In order to combine information from independently managed data sources, integration systems need to overcome the discrepancies in the way source data is maintained, modeled, and queried. Some aspects of these heterogeneities are due to the use of different hardware and software platforms to manage data. The emergence of standard protocols and middleware components, e.g., CORBA, DCOM, ODBC, JDBC, etc., has simplified remote access to many standard source systems. As a result, most of the research initiatives for integrating heterogeneous data sources have focused on resolving the schematic and semantic discrepancies that exist among related data in different sources, assuming the sources can be reliably and efficiently accessed using the above protocols. For example, Bill Kent's article [11] clearly illustrates the problems associated with the fact that the same real-world object can be represented in many different ways in different sources.

There are two approaches to building integration systems. The data warehousing approach [9] prefetches interesting information from the sources, merges the data, resolves existing discrepancies, and stores the integrated information in a central repository before users submit their queries. Since users never query the sources directly, warehousing data is an efficient mechanism to support frequently asked queries, as long as the data is available in the warehouse. The second approach is referred to as virtual warehousing or mediation [20] and provides users with a queriable, integrated view of the underlying sources.
No data is actually stored at the mediator, hence the term virtual warehouse. Users can query the mediator, which in turn queries the relevant sources and integrates the individual results into a format consistent with the mediator view. Unlike in the warehousing approach, data retrieval and processing are done at query time. The mediation approach is preferred when user queries are unpredictable or the contents of the sources change rapidly.
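The trade-off between the two approaches can be illustrated with a small sketch: a warehouse answers from prefetched data, while a mediator fetches and merges at query time. The following Python sketch is purely illustrative; the class, parameter, and variable names are our own and are not part of any system described here:

```python
import time

class HybridAnswerer:
    """Illustrative sketch: answer from a local warehouse (cache) when a
    sufficiently fresh result exists, otherwise fall back to querying the
    sources at query time (mediation) and refresh the warehouse."""

    def __init__(self, sources, max_age_seconds=3600):
        self.sources = sources          # callables that answer a query
        self.cache = {}                 # query -> (result, timestamp)
        self.max_age = max_age_seconds  # freshness threshold for cached results

    def answer(self, query):
        hit = self.cache.get(query)
        if hit is not None and time.time() - hit[1] < self.max_age:
            return hit[0]               # warehouse hit: no source access needed
        # mediation path: query every source and merge the partial results
        result = sorted(set().union(*(src(query) for src in self.sources)))
        self.cache[query] = (result, time.time())  # refresh the warehouse
        return result
```

A cached query is answered without touching the sources, which is also why such a system can keep answering while a source is temporarily unavailable.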

We have designed and implemented an integration system, called Information Integration Wizard (IWIZ), which combines the data warehousing and mediation approaches. IWIZ allows end-users to issue queries based on a global schema to retrieve information from various sources without knowledge about their location, API, or data representation. However, unlike in existing systems, queries that can be satisfied using the contents of the IWIZ warehouse are answered quickly and efficiently without connecting to the sources. When the relevant data is not available in the warehouse, or its contents are out of date, the query is submitted to the sources via the IWIZ mediator; the warehouse is also updated for future use. An additional advantage of IWIZ is that even when sources are temporarily unavailable, IWIZ may still be able to answer queries, as long as the information has been previously cached in the warehouse.

Due to the popularity of the Web and the fact that much of the interesting information is available in the form of Web pages, catalogs, or reports with mostly loose schemas and few constraints, we have focused on integrating semistructured data [17]. Semistructured data collectively refers to information whose contents and structure are flexible and thus cannot be described and managed by the more rigid traditional data models (e.g., the relational model).

Figure 1: Schematic description of the IWIZ architecture and its main components (front-end: querying and browsing interfaces (QBI); middleware: metadata repository, IWIZ mediator, warehouse, and warehouse manager; back-end: IWIZ wrappers 1 through n and sources 1 through n)

2. Overview of the IWIZ Architecture

A conceptual overview of the IWIZ system is shown in Figure 1. System components can be grouped into two categories: storage and control. Storage components include the sources, the warehouse, and the metadata repository.
Control components include the querying and browsing interface (QBI), the warehouse manager, the mediator, and the wrappers. In addition, there is information not explicitly shown in the figure: the global schema, the queries, and the data. The global schema, which is created by a domain expert, describes the information available in the underlying sources and consists of a hierarchy of concepts and their definitions, as well as the associated constraints. Internally, all data are represented in the form of XML documents [19], which are manipulated through queries expressed in XML-QL [3]. The global schema, which describes the structure of all internal data, is represented as a Document Type Definition (DTD), a sample of which is shown later in the paper. The definitions of the concepts and terms used in the schema are stored in the global ontology [6].

As indicated in Figure 1, users interact with IWIZ through QBI, which provides a conceptual overview of the source contents in the form of the global IWIZ schema and shields users from the intricacies of XML and XML-QL. QBI translates user requests into equivalent XML-QL queries, which are submitted to the warehouse manager. If the query can be answered by the warehouse, the answer is returned directly to QBI. Otherwise, the query is processed by the mediator, which retrieves the requested information from the relevant sources through the wrappers. The contents of the warehouse are updated whenever a query cannot be satisfied or whenever existing content has become stale. Our update policy ensures that, over time, the warehouse contains as much of the result set as possible to answer the majority of the frequently asked queries. We now describe the three main control components in detail.

2.1. Wrappers

Source-specific wrappers provide access to the underlying sources and support schema restructuring [8].
Specifically, a wrapper maps the data model used in the associated source into the data model used by the integration system. Furthermore, it has to determine the correspondence between concepts presented in the global schema and those presented in the source schema and carry out the restructuring. In IWIZ, currently all of the sources are based on XML; hence, only structural conversions are necessary. These structural conversions are captured in the form of mappings, which are generated when the wrapper is configured. To generate the mappings, the wrapper uses the explicit source schema defined in the form of a DTD as well as a local ontology. This local ontology describes the meaning of the source vocabulary in terms of the concepts of the global ontology. If the underlying sources have no explicitly defined DTD, one must first be inferred by the wrapper [7]. At run-time, the wrapper receives XML-QL queries from the mediator and transforms them into equivalent XML-QL queries that can be executed on the XML

source document using the wrapper s own query processor; note, we are assuming that our sources have no query capabilities of their own. The result of the query is converted into an XML document consistent with the global IWIZ schema and returned to the mediator. IWIZ wrappers automate much of the setup and conversion specification generation; in addition, they can be generated efficiently with minimal human intervention. Details describing the IWIZ wrapper design and implementation can be found in [18]. 2.2. Mediator The mediator supports querying, reconciling and cleansing of related data from the underlying sources. The mediator accepts a user query that is written against the global schema and generates one or more subqueries that retrieve the data that is necessary to satisfy the original user query from the sources. To do this, the mediator rewrites the user query into multiple source-specific queries; furthermore, it generates a query plan that describes an execution sequence for combining the partial results from the individual sources. After the partial results have been merged, the mediator reconciles the data into the integrated result requested in the user query. Data reconciliation refers to the resolution of potential data conflicts, such as multiple occurrences of the same realworld object or inconsistencies in the data among related objects. We have classified all possible conflicts that can occur when reconciling XML-based data and developed a novel hierarchical clustering model to support automatic reconciliation. Our experiments have shown that on the average, our clustering strategy automatically reduces the number of duplicates by more than 50%, while at the same time, reduces the number of incorrectly matched objects by up to 43% compared to no clustering [14]. 
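The clustering model itself is described in [14]; as a greatly simplified illustration of the underlying idea (not the actual IWIZ algorithm), records can be grouped under a normalized key so that duplicate detection and merging only need to compare records within a cluster. All function names below are hypothetical:

```python
from collections import defaultdict

def normalize(title):
    """Crude clustering key: lowercase and keep alphanumerics only.
    (The real model clusters hierarchically; this is a stand-in.)"""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def reconcile(records):
    """Cluster article records (dicts) by normalized title, then keep one
    representative per cluster -- simplified duplicate elimination."""
    clusters = defaultdict(list)
    for rec in records:
        clusters[normalize(rec["title"])].append(rec)
    # within each cluster, keep the record carrying the most fields
    return [max(group, key=len) for group in clusters.values()]
```

The point of clustering is efficiency: pairwise comparison of n records costs O(n^2), whereas comparing only within small clusters is far cheaper and, as reported above, also reduces incorrect matches.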
The knowledge needed to generate subqueries and to configure the clustering model for data reconciliation is captured (with human input) in the mediation specification, which is used to configure the mediator at build time. To the best of our knowledge, the IWIZ mediator is the only mediator that supports automatic reconciliation when merging the returned data to form the integrated result. Details about the conflict classification, the clustering model, and the mediator implementation can be found in [14, 16].

2.3. Data Warehouse Manager

In order to warehouse data items that are represented as XML documents, a persistent storage manager for XML is needed. We found only a few systems for persistently storing XML/DOM objects [13]. Therefore, we decided to use Oracle 8i as the underlying storage manager and to develop XML extensions for converting between XML and Oracle. The decision to use an RDBMS was based on its maturity and widespread usage, as well as on the fact that many relational database vendors are enhancing their systems with XML extensions. We also developed an XML wrapper to encapsulate the functionality of Oracle and provide an API that is consistent with the XML data model used by the other IWIZ components.

The XML wrapper is part of the warehouse manager, which controls and maintains the information stored in the data warehouse. At build time, the warehouse manager creates the relational database schema that corresponds to the global IWIZ schema. At run time, it translates XML-QL queries into equivalent SQL statements that can be executed on the relational schema in the warehouse; it converts a relational query result into an XML document that exhibits the structure specified by the original XML-QL query; and it maintains the contents of the warehouse in light of updates to the sources as well as to the query mix of the IWIZ users.
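The actual schema-mapping and query-translation rules are described in [10, 15]. The sketch below merely illustrates the general flavor of the problem with a hypothetical "one table per element type, plus a parent foreign key" encoding, in which the path ontology.bib.article.title becomes a SQL join along the parent axis; none of the table or function names are taken from IWIZ:

```python
import sqlite3

# Hypothetical relational encoding of a fragment of the global schema:
# one table per element type, with a parent key preserving the hierarchy.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE article (id INTEGER PRIMARY KEY);
    CREATE TABLE title   (id INTEGER PRIMARY KEY,
                          article_id INTEGER REFERENCES article(id),
                          pcdata TEXT);
""")
conn.execute("INSERT INTO article VALUES (1)")
conn.execute("INSERT INTO title VALUES (1, 1, 'Lore: A DBMS for Semistructured Data')")

def titles_of_articles():
    """SQL counterpart of the path article/title: join the element tables
    along the parent axis and return the leaf text (#PCDATA) values."""
    cur = conn.execute(
        "SELECT t.pcdata FROM article a JOIN title t ON t.article_id = a.id")
    return [row[0] for row in cur]
```

Reconstructing a nested XML result from such flat rows is the harder direction, which is precisely the "not straightforward" conversion the warehouse manager addresses.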
In order to understand how the warehouse manager maintains the contents of the warehouse, we briefly explain the sequence of events that occurs when a user query is submitted to IWIZ. The query is forwarded to the warehouse manager, which analyzes whether the requested data are in the warehouse and, if so, whether the contents are up to date. To determine if the query can be satisfied by the warehouse, we use results from query containment theory. To determine whether the contents are up to date, we use a pre-defined consistency threshold, which specifies the time interval for which a result is valid. If the query cannot be satisfied by the warehouse, it is sent to the mediator, which retrieves the data from the sources. In the latter case, the warehouse manager also generates one or more maintenance queries to update the warehouse contents. Note that, since the warehouse schema and the global IWIZ schema have different structures, the original user query cannot be used to maintain the warehouse.

Converting data and queries between the hierarchical graph model of XML and the flat structure of the relational model is not straightforward. The warehouse manager uses novel techniques for preserving the hierarchical structure of XML when storing XML documents in, and retrieving them from, the warehouse. Details regarding the warehouse manager and its implementation can be found in [10, 15].

3. Case Study: Integrating Bibliography Data in IWIZ

In order to demonstrate how the IWIZ mediator works, we describe a simple integration scenario involving multiple bibliography sources. Our global schema, which is represented in the form of a DTD,

contains terms such as article, author, book, editor, proceedings, title, year, etc. A snapshot of the DTD is shown in Figure 2. The +, ?, and * symbols indicate XML element constraints, referring to one-or-more, zero-or-one, and zero-or-more occurrences, respectively. For example, ontology, the root element of the schema, may contain zero or more bib elements, which in turn may contain zero or more bibliographical objects such as article, book, booklet, and so on. The symbol #PCDATA means that the corresponding element in the XML document must have a text value.

<!ELEMENT ontology (bib)*>
<!ELEMENT bib (article | book | booklet | ...)*>
<!ELEMENT article (author+, title, year, month?, pages?, note?, journal)>
<!ELEMENT address (#PCDATA)>
<!ELEMENT author (firstname?, lastname, address?)>
<!ELEMENT firstname (#PCDATA)>
<!ELEMENT lastname (#PCDATA)>
<!ELEMENT journal (title, year?, month?, volume?, number?)>
<!ELEMENT month (#PCDATA)>
<!ELEMENT note (#PCDATA)>
<!ELEMENT number (#PCDATA)>
<!ELEMENT title (#PCDATA)>
<!ELEMENT type (#PCDATA)>
<!ELEMENT volume (#PCDATA)>
<!ELEMENT year (#PCDATA)>

Figure 2: Sample DTD describing the structure of the concept article in an ontology of the bibliography domain

The global schema is used by all three control components: by the warehouse manager to create the relational schema for storing query results, by the wrappers as a target schema during the restructuring of query results, and by the mediator when merging and reconciling the source results. In our case study, we consider eight overlapping sources, each capable of providing some of the data for the concepts in the global schema. The current implementation of IWIZ supports three different types of queries with varying degrees of complexity: a simple query, which contains no nesting, no projections, and no joins; a projection query, which contains one or more conditions on a particular concept; and a join query, which contains join conditions between two or more concepts.
More complex types of queries, such as nested or group-by queries, will be supported in the next version of IWIZ. Because of space limitations, we only demonstrate the mediation of a simple query.

function query() {
  WHERE <ontology>
          <bib>
            <article>
              <title> <PCDATA> $title </PCDATA> </title>
            </article>
          </bib>
        </ontology> IN IWIZ
  CONSTRUCT <article_in_iwiz> $title </article_in_iwiz>
}

Figure 3: Simple XML-QL query produced by QBI

Figure 3 shows a sample XML-QL query that retrieves article titles (as bound by the path expression <ontology><bib><article><title><PCDATA> $title ...) from the IWIZ system and also specifies the format of the result (as defined in the CONSTRUCT clause). The IN clause, which usually specifies the name of the XML document on which the query is to be executed, here indicates that the answer is to be retrieved from the IWIZ system. The query is submitted to the warehouse manager, which determines whether the desired titles exist in the warehouse. Here we assume that the requested data must be fetched from the sources, in order to demonstrate the mediation process. The mediator creates a query plan and a set of subqueries against those sources that contain articles and their titles. Note that since not all source results are complete, it is the job of the mediator to merge the data into a complete result, which may not always be possible. This is accomplished by one or more join queries, which are executed against the partial source results to produce a single answer. In our simple case, only one join query is necessary. Note that creating the query plan requires significant knowledge about the contents and query capabilities of the sources/wrappers. Finally, the mediator reconciles any remaining discrepancies in the integrated result, using the clustering module to detect duplicates and to resolve inconsistencies among related data items [14].
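As a simplified illustration of such a join step (not the mediator's actual implementation), partial per-source title lists can be merged with an equi-join on the title value, which for this flat case reduces to a set intersection; the function name is our own:

```python
def join_on_title(source_results):
    """Merge partial per-source results with an equi-join on the title
    value: keep only titles that can be matched across all source results
    (sketch of a Figure-4b-style join, flattened to set operations)."""
    merged = set(source_results[0])
    for partial in source_results[1:]:
        merged &= set(partial)   # join condition: title values are equal
    return sorted(merged)
```

A real mediator would join full records (not bare strings) and, as noted above, must also cope with sources whose partial results cannot always be completed.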
<?xml version="1.0"?>
<!DOCTYPE QueryPlan SYSTEM "QueryPlan.dtd">
<QueryPlan uquid="0001" forelement="ontology.bib.article.title">
  <ExecutionTree queryprocessor="xmlql.cmd"
                 queryfilename="0001.et1.xmlql"/>
</QueryPlan>

Figure 4a: Sample query plan
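A mediator can dispatch such a plan by reading the query-processor and query-file attributes from the ExecutionTree element. The sketch below does exactly that with Python's standard xml.etree.ElementTree; the function name is our own, and the DOCTYPE line is omitted for self-containment:

```python
import xml.etree.ElementTree as ET

# Plan document in the style of Figure 4a (attribute names as printed there)
PLAN = """<QueryPlan uquid="0001" forelement="ontology.bib.article.title">
  <ExecutionTree queryprocessor="xmlql.cmd" queryfilename="0001.et1.xmlql"/>
</QueryPlan>"""

def parse_plan(xml_text):
    """Extract the query processor and query file referenced by a plan's
    execution tree, as a mediator would before dispatching execution."""
    root = ET.fromstring(xml_text)
    step = root.find("ExecutionTree")
    return (step.get("queryprocessor"), step.get("queryfilename"))
```

Keeping the processor name in the plan, rather than hard-coding it, is what allows a different query processor to be linked in without recompilation.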

/* 0001.et1.xmlql */
function query() {
  WHERE <ontology><bib><article><title>
          <PCDATA> $med_Title1 </>
        </></></></> IN source1.xml,
        <ontology><bib><article><title>
          <PCDATA> $med_Title2 </>
        </></></></> IN source2.xml,
        $med_Title1 = $med_Title2
  CONSTRUCT <ontology><bib><article><title>
              <PCDATA> $med_Title1 </>
            </></></></>
}

Figure 4b: Join query referenced in the query plan in Figure 4a

Figure 4a shows a sample query plan and its execution tree. The execution tree includes a reference to the query file containing the join query shown in Figure 4b, as well as a reference to the query processor on which to execute it. (This allows us to easily link in a newer version of the same, or a different, query processor without recompilation.) In the current version, the mediator invokes the XML-QL processor and executes the XML-QL join query. The final answer, which is returned to the user, is shown in Figure 5. Note that the schema of the result is consistent with the CONSTRUCT clause (i.e., the requested user view) in Figure 3.

<?xml version="1.0" encoding="utf-8"?>
<XML ID="1.whr.genoid_0">
  <article_in_iwiz>
    <title>Superviews: Virtual Integration of Multiple Databases</title>
  </article_in_iwiz>
  <article_in_iwiz>
    <title>Optimization by Simulated Annealing</title>
  </article_in_iwiz>
  ...
</XML>

Figure 5: Snapshot of the query result returned to the user

4. Conclusion and Future Research

We have introduced a solution to data integration that allows end-users to access and retrieve information from multiple sources through a consistent, integrated view. IWIZ uses a combined data warehousing-mediation approach for enhanced query performance and increased reliability. Specifically, it uses novel wrapper and mediation technologies to reduce human involvement as much as possible. Given the popularity of the Web, IWIZ is designed to integrate sources that provide XML-based, semistructured data.
In the future, we plan to extend the system to support additional source data models, including unstructured sources. Within IWIZ, data are represented as XML documents whose schema is defined by a DTD. Given the rapid evolution of XML and its related technologies, our next version of the prototype will move towards XML Schema for its ability to represent a richer set of data modeling constructs. Other plans include the support of more complex queries in the mediator and more sophisticated warehouse maintenance procedures, which rely on source monitors for determining when the warehouse needs updating, rather than on user-defined refresh times. We will continue to report on our progress in future conferences and workshops.

5. References

[1] S. Chawathe, H. Garcia-Molina, J. Hammer, K. Ireland, Y. Papakonstantinou, J. Ullman, and J. Widom, "The TSIMMIS Project: Integration of Heterogeneous Information Sources," presented at the 10th Anniversary Meeting of the Information Processing Society of Japan, Tokyo, Japan, 1994.
[2] W. W. Cohen, "The WHIRL Approach to Data Integration," IEEE Intelligent Systems, vol. 13, pp. 20-24, 1998.
[3] A. Deutsch, M. Fernandez, D. Florescu, A. Levy, and D. Suciu, "A Query Language for XML," presented at the 8th International World Wide Web Conference (WWW8), Toronto, Canada, 1999.
[4] K. Geihs, "Middleware Challenges Ahead," IEEE Computer, vol. 34, pp. 24-31, 2001.
[5] M. R. Genesereth, A. M. Keller, and O. M. Duschka, "Infomaster: An Information Integration System," presented at the 1997 ACM SIGMOD International Conference on Management of Data, Tucson, Arizona, 1997.
[6] T. R. Gruber, "A Translation Approach to Portable Ontologies," Knowledge Acquisition, vol. 5, pp. 199-220, 1993.
[7] H. Gu, "Designing and Implementing a DTD Inference Engine for the IWIZ Project," Computer and Information Science and Engineering Department, University of Florida, Gainesville, 2000, pp. 76.
[8] J. Hammer, M. Breunig, and H. Garcia-Molina, "Template-Based Wrappers in the TSIMMIS System," presented at the 23rd ACM SIGMOD International Conference on Management of Data, Tucson, Arizona, 1997.

[9] W. H. Inmon and C. Kelley, Rdb/VMS: Developing the Data Warehouse. Boston, London, Toronto: QED Publishing Group, 1993.
[10] R. Kanna, "Managing XML Data in a Relational Warehouse: On Query Translation, Warehouse Maintenance, and Data Staleness," Computer and Information Science and Engineering Department, University of Florida, Gainesville, 2001.
[11] W. Kent, "The Many Forms of a Single Fact," presented at the IEEE Spring Compcon, San Francisco, CA, 1989.
[12] A. Levy, "The Information Manifold Approach to Data Integration," IEEE Intelligent Systems, vol. 13, pp. 12-16, 1998.
[13] J. McHugh, S. Abiteboul, R. Goldman, D. Quass, and J. Widom, "Lore: A Database Management System for Semistructured Data," SIGMOD Record, vol. 23, pp. 54-66, 1997.
[14] C. Pluempitiwiriyawej, "A New Hierarchical Clustering Model for Speeding Up the Reconciliation of XML-Based, Semistructured Data in Mediation Systems," Computer and Information Science and Engineering Department, University of Florida, Gainesville, 2001, pp. 121.
[15] R. Ramani, "A Toolkit for Managing XML Data With a Relational Database Management System," Computer and Information Science and Engineering Department, University of Florida, Gainesville, 2001, pp. 54.
[16] A. Shah, "Source-Specific Query Rewriting and Query Plan Generation for Merging XML-Based Semistructured Data in Mediation Systems," Computer and Information Science and Engineering Department, University of Florida, Gainesville, 2001.
[17] D. Suciu, Proceedings of the Workshop on Management of Semistructured Data, Tucson, AZ, 1997.
[18] A. Teterovskaya, "Conflict Detection and Resolution During Restructuring of XML Data," Computer and Information Science and Engineering Department, University of Florida, Gainesville, 2000, pp. 113.
[19] W3C, The World Wide Web Consortium (W3C), http://www.w3.org/.
[20] G. Wiederhold, "Mediators in the Architecture of Future Information Systems," IEEE Computer, vol. 25, pp. 38-49, 1992.