A Composite Benchmark for Online Transaction Processing and Operational Reporting


Anja Bog, Jens Krüger, Jan Schaffner
Hasso Plattner Institute, University of Potsdam
August-Bebel-Str. 88, Potsdam, Germany
{anja.bog, jens.krueger, jan.schaffner

Abstract

Up-to-date data is of immense importance for operational reporting. Global enterprises require such a high throughput during daily operations that reporting systems had to be separated from the transactional system to avoid inhibiting its performance. These architectures, however, do not provide the required reporting flexibility, as the data set is a pre-defined subset of the actual data and is updated only at certain intervals, e.g. nightly. The composite benchmark for online transaction processing (OLTP) and operational reporting, henceforth CBTR, provides the means to evaluate the performance of enterprise systems under a mixed workload of OLTP and operational reporting queries. Such a system offers up-to-date information and the flexibility of the entire data set for reporting. CBTR deliberately provokes the conflicts that were the reason for separating the two workloads onto different systems. In this paper we introduce the concepts of CBTR, which is based on the original data set and real workloads of an existing, globally operating enterprise.

1. Introduction

OLTP systems are the backbone of today's information systems, supporting the daily operations [12]. All incoming requests and information of customers, suppliers and other business partners, as well as internal production processes, have to be recorded, monitored and processed. Therefore, throughput is a key requirement. Online analytical processing (OLAP) systems support decision-making processes and the determination of future strategies on the basis of analyzing data collected during daily operations. Inmon [8] distinguishes between two kinds of reporting: operational and informational.
Operational reporting covers the most recent information within an enterprise's OLTP system, supporting the day-to-day activities in detail. Informational reporting supports strategic and long-term decision making, for which huge amounts of data are summarized. Operational reporting and OLTP are closely related in terms of the data set used for processing. Since both workloads interfere with each other, it is common to provide a dedicated system for each. Specialized data stores, called operational data stores (ODS) [7], support operational reporting, and data warehouses [6] support informational reporting without impeding the performance of the OLTP system. Advances in hardware technology, e.g. the vast increase of CPU speed and memory sizes, as well as new trends in data storage management, e.g. column-oriented storage architectures (also called column stores), suggest that combining the OLTP workload with the operational reporting workload within the same system might become possible. As a contribution of this paper, we present a benchmark to measure the impact of a combined workload consisting of OLTP transactions and operational reporting queries. For this we introduce a scenario with an integrated data schema relevant for both. CBTR does not use generated data. It is entirely based on the data schema, the transactions and the operational reports taking place during the daily operations of a real enterprise. The benchmark will run with the actual data of the enterprise, though a data generator will be provided for the public version. Besides varying the data set size, CBTR also introduces two novel scales, i.e. altering the data schema and the mixture of the workload. The remainder of this paper is structured as follows: In Section 2 we introduce the methodology for creating a new benchmark out of the systems of an existing enterprise.
Section 3 introduces the scenario and data schema of CBTR, as well as the transactions for OLTP and the queries for operational reporting on top of this data schema. Furthermore, the problems arising from mixing the different workloads are discussed, and we show how they are incorporated in CBTR. In Section 4 we examine the effects of using real company data of daily business operations for the benchmark instead of generated data and introduce the different scales along which CBTR can be modified. Furthermore, existing database performance benchmarks for transaction processing and analytics are reviewed and their applicability to our use case of a combined workload is analyzed. Section 5 concludes the paper.

2. Methodology

Instead of combining existing standardized benchmarks and adjusting them to obtain a mixed workload, we are creating an entirely new benchmark that closely resembles a real-world scenario filled with real enterprise data. The first step toward a realistic benchmark was finding an enterprise sufficiently large to allow scaling of the database size, e.g. starting from a small database consisting of only several gigabytes of data from one division and scaling up to a data set worth several terabytes covering all global divisions together. Second, from the variety of scenarios within the enterprise, e.g. order-to-cash management, procure-to-pay management, or supply chain planning, one is chosen for the benchmark. We decided to use the order-to-cash cycle as the basis for the benchmark. This scenario is equally important for daily operations, e.g. incoming orders, outgoing invoices, and finally incoming payments, as it is for operational reporting, e.g. analyzing the levels of order fulfillment, determining all open orders of customers to ensure timely deliveries, or finding late payments to trigger dunning. The next step is the extraction of the data schema for the benchmark from the enterprise's OLTP system. In the beginning the schema and data are kept without changes, but cut cleanly out of the entire environment to achieve a closed setting. How the schema, and consequently the data, will evolve as the benchmark progresses is discussed in Section 4.2. Next, the OLTP transactions on top of the order-to-cash data schema are extracted, including the inter-transaction dependencies; for example, an invoice can usually only be sent to the customer if all ordered items have been delivered.
Therefore, the transactions recording the delivery of all the line items that will be invoiced should have happened before the transaction creating the invoice in the system. In the same way, the operational reporting queries relevant for the order-to-cash cycle will be extracted from the ODS, which is part of the OLAP system of the enterprise. Since operational reporting needs data on a very detailed level, most suitably line item level, we found that the queries use structures very similar to those in the OLTP system. Using the data warehouse for such operational reports requiring line item data is not a preferred use case, as data warehouses do not usually hold data on item level. Requiring data on line item granularity leads to an explosion in data set size, as the aggregates inherent to the data warehouse architecture have to be built for a large number of combinations.

3. Designing the Benchmark

OLTP and operational reporting are two fundamentally different workloads that interfere with each other. In this section we propose the logical database schema, the OLTP transactions and the operational reporting queries for the benchmark. We then analyze the conflicts that arise when mixing transaction processing and operational reporting workloads, and show how these conflicts are built into the benchmark to achieve a full picture of the performance of an architecture processing both workloads in parallel.

3.1. Database Schema

The database schema provided in this section will be the first notch on the scale of varying the underlying logical database schema from being optimized for OLTP to being optimized for OLAP, as discussed in Section 4.2. The process underlying the order-to-cash scenario consists of several steps: the creation of a sales order, the delivery of the ordered items, billing the delivered items and clearing incoming payments. Therefore, the database schema consists of table sets, each belonging to one of these steps.
Figure 1 depicts a very simplified overview of the schema, showing only an excerpt of the most important tables and attributes.

[Figure 1. Order-to-cash database schema]
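To make the recurring structure concrete, the simplified schema of Figure 1 can be sketched in SQL. This is a minimal illustration, assuming hypothetical table and column names; the benchmark itself uses the enterprise's original, far wider tables. SQLite is used here only to keep the sketch self-contained:

```python
import sqlite3

# Minimal sketch of the order-to-cash schema: the header/line-item pattern
# and the linkage of table sets via their line items. All names are
# illustrative, not the enterprise's actual table and column names.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales_doc_header (
    sales_doc_id            INTEGER PRIMARY KEY,
    order_date              TEXT NOT NULL,
    requested_delivery_date TEXT,
    sold_to_party           TEXT NOT NULL
);
CREATE TABLE sales_doc_line_item (
    sales_doc_id INTEGER NOT NULL REFERENCES sales_doc_header,
    position     INTEGER NOT NULL,   -- line item number within the order
    material     TEXT NOT NULL,
    quantity     INTEGER NOT NULL,
    PRIMARY KEY (sales_doc_id, position)
);
-- A delivery line item references the sales order line item it fulfils,
-- so one delivery document may span several sales orders and the items
-- of one sales order may be split over several deliveries.
CREATE TABLE delivery_line_item (
    delivery_doc_id    INTEGER NOT NULL,
    position           INTEGER NOT NULL,
    sales_doc_id       INTEGER NOT NULL,
    sales_position     INTEGER NOT NULL,
    delivered_quantity INTEGER NOT NULL,
    PRIMARY KEY (delivery_doc_id, position),
    FOREIGN KEY (sales_doc_id, sales_position)
        REFERENCES sales_doc_line_item (sales_doc_id, position)
);
""")
```

The billing and financial table sets follow the same header/line-item pattern and link to their predecessors in the same way via line item references.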

Regarding the general structure, each set of tables contains a header table and a line item table, e.g. sales document header and sales document line item. The header table contains general information applicable to the entire set of line items belonging to it. Information stored in the sales document header includes, for example, the order date, the requested delivery date, and the sold-to-party. A line item belongs to exactly one header, and a header contains at least one line item. Specific data contained in the sales document line item table includes the ordered material, quantity, allowed deviation from the ordered quantity, and weight and volume information. The sales document partner table may contain additional information about parties with specific roles for the line items. These roles are, e.g., sold-to-party, ship-to-party, payer, or vendor of a material. Consequently, if the ship-to-party is given on line item level and differs from the information given in the header, the header is overridden for this case. The sales document business data table holds data for a line item only if it deviates from the data given in the header. The other table sets, i.e. the delivery, billing and financial tables, show a similar structure, which we do not discuss in the same detail. The different table sets are connected via their line items. The path of a line item can be traced from the order, via the delivery, to the final payment. As a result, no one-to-one relationship exists between a sales order document and a delivery document. In fact, a delivery document may either span several sales order documents or cover only a partial delivery, meaning that the line items of one sales order document are split over several deliveries. The same is true for billing documents, where a customer may trigger several orders but wants to pay after all have been delivered. The alternative, where a billing document covers only part of a delivery, is rare, but also possible.
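The partner-role override rule described above can be expressed as a simple resolution query. The sketch below assumes hypothetical table, column and party names chosen only to illustrate the rule: a role maintained on line item level takes precedence over the header value.

```python
import sqlite3

# Illustrative data: order 1 ships to the header party, except line item 2,
# which carries its own ship-to entry in the partner table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales_doc_header (doc_id INTEGER, ship_to_party TEXT);
CREATE TABLE sales_doc_line_item (doc_id INTEGER, position INTEGER);
CREATE TABLE sales_doc_partner (doc_id INTEGER, position INTEGER,
                                role TEXT, party TEXT);

INSERT INTO sales_doc_header VALUES (1, 'ACME Berlin');
INSERT INTO sales_doc_line_item VALUES (1, 1), (1, 2);
INSERT INTO sales_doc_partner VALUES (1, 2, 'ship-to', 'ACME Hamburg');
""")

# Resolve the effective ship-to party per line item: a line item level
# partner entry overrides the header value where one exists.
rows = conn.execute("""
    SELECT li.position, COALESCE(p.party, h.ship_to_party)
    FROM sales_doc_line_item li
    JOIN sales_doc_header h ON h.doc_id = li.doc_id
    LEFT JOIN sales_doc_partner p
           ON p.doc_id = li.doc_id AND p.position = li.position
          AND p.role = 'ship-to'
    ORDER BY li.position
""").fetchall()
# line item 1 falls back to the header, line item 2 is overridden
```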
Master data tables have been omitted in the overview of the data schema. These are the customer detail and material detail tables. Further tables omitted in the overview are status tables, which hold information about the progress of the entire process.

3.2. The Transactions and Queries

The transactional side of CBTR consists of the following transactions with write access:

- Creation of new sales order documents including line items, entering the customer details, ordered materials and quantities, and negotiating a delivery date: Here, checks have to be run to verify that all selected materials and the customer are maintained in the system.
- Creation of delivery documents based on sales orders: This transaction checks that all items to be delivered have been ordered beforehand and are sent to the same receiving party.
- Creation of billing documents: This transaction checks that all referenced line items have been delivered before issuing the bill. Creating billing documents triggers the creation of corresponding documents in the accounting system, where the items are marked as open. Open line items are those items that have been delivered, but not yet paid.
- Recording of payments and clearing open line items: This transaction includes pulling all open line items from the database and finding those that are referenced in an incoming payment document in order to clear them.

Read-only OLTP transactions are also included, e.g. displaying a complete sales order document, delivery document, billing document, or accounting document, selected by key; showing all delivered but not yet billed items; and displaying customer and material details.

The operational reporting queries include the following:

- Two queries for ensuring customer satisfaction, which determine the entire order processing time, from the incoming order until the last item has been delivered to the customer, and the percentage of sales orders that have been shipped completely and on time.
- Determine the average time between the delivery of items and the recording of the payment, also known as days sales outstanding.
- The daily flash query identifies today's sales revenues, grouped, e.g., by customer, product, or region.
- Determine the liquidity based on payment due dates and the clearing time of past payments. This can be done per customer or over all customers.

To achieve a realistic mixture of the workload, the detailed interactions in the systems of the enterprise have to be analyzed and ported to CBTR. This includes statistics for each transaction and query, such as how many orders are created within a certain division on average per minute, and how many users are working on the system in parallel. Most systems provide capabilities that track user interactions, database transactions, and system load, which can be exploited here. This, however, is still future work.

3.3. Mixing the Workload

French [5] gives a characterization of the different workloads of OLTP and reporting. OLTP transactions are characterized by simple, mixed read and write operations. These are usually row-based, retrieving a large number of columns. For example, an entire sales order document, including all its attributes and line items with details, is displayed or updated. OLTP transactions are furthermore highly selective, meaning that exactly one sales order document and its line item entries are touched, instead of a huge set of them. Reporting queries, on the other hand, are mainly read-only and complex, and the data sets touched by a query are large. Compared to OLTP transactions, they are relatively long-running due to their low selectivity, and usually only a small number of columns is retrieved. For example, a query computing the average number of line items in a sales order only needs the identifier column of the sales order table and the identifier and line item position columns of the sales order line item table.

Several conflicts arise from the different behaviors of both workloads. One of them is resource contention. Due to the difference in running times, OLTP transactions might queue up while a reporting query is processed. A solution to this problem is offered by emerging multi-core and multi-processor architectures, which are nowadays widely available. If a reporting query reads all the values of a column in a table, OLTP transactions trying to update single values or add new records are blocked for writing and have to wait until the reading query has released its locks. Inserts during reporting queries are also an issue that has to be taken care of. This can be done either by employing snapshot isolation or by relaxing the isolation level of reporting queries to non-repeatable read, which causes no trouble as reporting queries are read-only. Both conflicts will occur inherently in the benchmark, as we are operating on the same data set. Consequently, they have to be solved by the architecture under test.

Running OLAP queries directly on top of traditional OLTP row store implementations has been proven to be slow, as it involves many join operations and scans of entire tables. Not just the columns of interest are read, but all other columns too, since row stores usually operate at the granularity of entire tuples [4]. This issue has been targeted from two sides: First, optimizations have been developed that restructure the logical data schema, i.e. the star and snowflake schemas. These ensure a lower read overhead for high-volume reporting queries.
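The two access patterns characterized above can be made concrete. In the sketch below (hypothetical table and column names, SQLite only for self-containment), the OLTP statement is highly selective and fetches all columns of one document, while the reporting query scans every order but touches only two columns:

```python
import sqlite3

# Toy data: order 1 has two line items, order 2 has one.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales_doc_line_item (
    sales_doc_id INTEGER, position INTEGER, material TEXT, quantity INTEGER);
INSERT INTO sales_doc_line_item VALUES
    (1, 1, 'bolt', 100), (1, 2, 'nut', 100), (2, 1, 'washer', 50);
""")

# OLTP pattern: one key, all attributes of the document.
order_1 = conn.execute(
    "SELECT * FROM sales_doc_line_item WHERE sales_doc_id = 1").fetchall()

# Reporting pattern: all rows, two columns -- average line items per order.
avg_items = conn.execute("""
    SELECT AVG(n) FROM (SELECT COUNT(position) AS n
                        FROM sales_doc_line_item
                        GROUP BY sales_doc_id)
""").fetchone()[0]
# orders of sizes 2 and 1 yield an average of 1.5
```

A column store serves the second pattern by reading only the two touched columns, which is precisely the kind of optimization discussed next.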
The benchmark will tackle these optimizations in future versions, with the goal of figuring out a schema for a combined workload with the least overhead for both sides. The latest trend of optimization for a specific workload moves away from the traditional relational implementations: the physical layout of the data is changed and the data access engine is adapted accordingly, as has been shown in the C-Store project [11]. This, however, is transparent to the benchmark.

4. Unique Properties of CBTR

The properties contributing to the uniqueness of CBTR, justifying its existence as a complement to the standardized benchmarks, are discussed in this section. The first is that CBTR is entirely based on real data; the second is that novel scales besides the data set size are introduced.

4.1. Real Data

CBTR is not just defined to closely resemble the behavior of the original systems of an existing enterprise, but actually operates with real data instead of generated data. Most data generators consider statistical distributions of data in real enterprises. These, however, still lead to idealistic data, as real data usually deviates in one way or another from statistical distributions. Therefore, generated data sets never achieve the same behavior as operating with the original data. The best way of getting the real picture is still testing within the enterprise, which, however, is not possible due to high availability requirements and the costly implementation of the benchmark activity. Consequently, CBTR, taking a snapshot of real systems, is as close to a real scenario as possible. Since no data will be generated, the original data set has to be split in two parts. One part is loaded into the database upfront, and the other is used to feed the OLTP transactions with real data to keep the entire data set realistic.

4.2. Three Scales

As mentioned before, CBTR introduces three scales to focus on different aspects. The first is the commonly used one of varying the data set size.
The special case here is that scaling up is not as easy as generating a new data set with more branches and customers. Rather, the original data set has to be cut along its business boundaries, e.g. by division. Starting with one division and adding further divisions will increase the overall size of the database. Divisions are, however, not equal in size. As a result, the scaling along divisions may not be done in a linear fashion. The second scale is the variation of the logical database schema from being optimized for transaction processing to being optimized for reporting. Currently, the two ends of this scale are clear. The OLTP-optimized schema was described earlier in this paper in Section 3.1, and the other end of this scale, resulting in a star or snowflake schema optimized for the reporting queries, is also conceivable. It can be copied out of the ODS in the enterprise's data warehouse system. One constraint that has to be ensured is that all OLTP transactions of CBTR remain feasible on this schema. Future work here includes finding characteristics to achieve meaningful alterations of one schema toward the other, and, as a result, adapting the data set without losing its authenticity. The last scale is a variation of the workload, starting from pure OLTP, passing through the realistic mixed workload that has been observed, and ending in a pure reporting workload. Applying the pure OLTP workload to the OLTP schema and doing the same for the reporting workload will result in a baseline to compare the mixed workload with, in order to

measure the performance losses. A side effect of CBTR is the creation of a data schema that is applicable for the mixed workload without much performance loss compared to the baselines, and the validation of this schema.

4.3. Related Work

For each of OLTP and OLAP, benchmarks exist to validate the performance of new architectures. The Transaction Processing Performance Council (TPC) benchmarks have become the standard benchmarks for OLTP and OLAP. TPC's currently active benchmarks are TPC-C [1] and its successor TPC-E [2] for OLTP, and TPC-H [3] for OLAP. A new benchmark for OLAP, TPC-DS [10], is currently in the review phase. These standard benchmarks could be applied to a combined architecture for OLTP and operational reporting by simply running them in parallel, but this would only give a partial picture of the actual performance of such a system, measuring only the effects of hardware resource contention. The reason for this is that the underlying data schemas of the different benchmarks differ to a great extent and cannot be integrated to create a sufficiently relevant and still realistic data schema for both. Data access conflicts are, however, characteristic of a combined workload. Another issue of the standard benchmarks is that they do not reflect the actual schemas used in operation, but an idealized version. According to Othayoth and Poess [9], real schemas contain a considerably larger number of tables and columns. In CBTR, the table set is cut down, but the number of columns in the tables is quite large, varying between 50 and 300 columns. Furthermore, Hsu et al. [13] claim that workloads in production exhibit a wider range of behavior than is reflected by the TPC benchmarks, and that the latter are therefore not representative of real workloads.

5. Conclusion

In this paper we introduced the conceptual design of a new benchmark combining transaction processing and operational reporting. The detailed specification will be reported separately. We have argued why such a benchmark is unique and complements the existing portfolio of standardized benchmarks. In contrast to all of them, this benchmark is entirely based on realistic data and targets architectures capable of performing mixed workloads that have been separated onto different systems in the past. A combined system would be a huge breakthrough for globally operating enterprises. We intend to use CBTR to compare such a system with traditional OLTP and OLAP systems. Future work is still needed in specifying the details of the benchmark, e.g. completing the scale varying the data schemas, drawing out the OLAP-optimized schema, and defining the ones in between. Furthermore, the exact workload characteristics have to be analyzed in the real systems and applied in CBTR.

References

[1] TPC Benchmark C, Standard Specification, Revision 5.9. Technical report, Transaction Processing Performance Council, June.
[2] TPC Benchmark E, Standard Specification. Technical report, Transaction Processing Performance Council, April.
[3] TPC Benchmark H (Decision Support), Standard Specification. Technical report, Transaction Processing Performance Council, February.
[4] M. M. Astrahan, M. W. Blasgen, D. D. Chamberlin, K. P. Eswaran, J. N. Gray, P. P. Griffiths, W. F. King, R. A. Lorie, P. R. McJones, J. W. Mehl, G. R. Putzolu, I. L. Traiger, B. W. Wade, and V. Watson. System R: Relational Approach to Database Management. ACM Trans. Database Syst., 1(2):97-137, 1976.
[5] C. D. French. Teaching an OLTP Database Kernel Advanced Data Warehousing Techniques. In ICDE '97: Proceedings of the Thirteenth International Conference on Data Engineering, Washington, DC, USA, 1997. IEEE Computer Society.
[6] W. H. Inmon. Building the Data Warehouse (2nd ed.). John Wiley & Sons, Inc., New York, NY, USA, 1996.
[7] W. H. Inmon. Building the Operational Data Store. John Wiley & Sons, Inc., New York, NY, USA.
[8] W. H. Inmon. Operational and Informational Reporting: Information Management: Charting the Course. DM Review Magazine, July.
[9] R. Othayoth and M. Poess. The Making of TPC-DS. In VLDB '06: Proceedings of the 32nd International Conference on Very Large Data Bases. VLDB Endowment, 2006.
[10] M. Poess, B. Smith, L. Kollar, and P. Larson. TPC-DS: Taking Decision Support Benchmarking to the Next Level. In SIGMOD '02: Proceedings of the 2002 ACM SIGMOD International Conference on Management of Data, New York, NY, USA, 2002. ACM.
[11] M. Stonebraker, D. J. Abadi, A. Batkin, X. Chen, M. Cherniack, M. Ferreira, E. Lau, A. Lin, S. Madden, E. O'Neil, P. O'Neil, A. Rasin, N. Tran, and S. Zdonik. C-Store: A Column-oriented DBMS. In VLDB '05: Proceedings of the 31st International Conference on Very Large Data Bases. VLDB Endowment, 2005.
[12] M. Vieira and H. Madeira. A Dependability Benchmark for OLTP Application Environments. In VLDB '03: Proceedings of the 29th International Conference on Very Large Data Bases. VLDB Endowment, 2003.
[13] W. W. Hsu, A. J. Smith, and H. C. Young. Analysis of the Characteristics of Production Database Workloads and Comparison with the TPC Benchmarks. Technical report, Berkeley, CA, USA, 1999.


More information

Hyrise - a Main Memory Hybrid Storage Engine

Hyrise - a Main Memory Hybrid Storage Engine Hyrise - a Main Memory Hybrid Storage Engine Philippe Cudré-Mauroux exascale Infolab U. of Fribourg - Switzerland & MIT joint work w/ Martin Grund, Jens Krueger, Hasso Plattner, Alexander Zeier (HPI) and

More information

A Data warehouse within a Federated database architecture

A Data warehouse within a Federated database architecture Association for Information Systems AIS Electronic Library (AISeL) AMCIS 1997 Proceedings Americas Conference on Information Systems (AMCIS) 8-15-1997 A Data warehouse within a Federated database architecture

More information

CSE 544 Principles of Database Management Systems. Alvin Cheung Fall 2015 Lecture 8 - Data Warehousing and Column Stores

CSE 544 Principles of Database Management Systems. Alvin Cheung Fall 2015 Lecture 8 - Data Warehousing and Column Stores CSE 544 Principles of Database Management Systems Alvin Cheung Fall 2015 Lecture 8 - Data Warehousing and Column Stores Announcements Shumo office hours change See website for details HW2 due next Thurs

More information

Fig 1.2: Relationship between DW, ODS and OLTP Systems

Fig 1.2: Relationship between DW, ODS and OLTP Systems 1.4 DATA WAREHOUSES Data warehousing is a process for assembling and managing data from various sources for the purpose of gaining a single detailed view of an enterprise. Although there are several definitions

More information

DATA WAREHOUSING II. CS121: Relational Databases Fall 2017 Lecture 23

DATA WAREHOUSING II. CS121: Relational Databases Fall 2017 Lecture 23 DATA WAREHOUSING II CS121: Relational Databases Fall 2017 Lecture 23 Last Time: Data Warehousing 2 Last time introduced the topic of decision support systems (DSS) and data warehousing Very large DBs used

More information

Column-Stores vs. Row-Stores: How Different Are They Really?

Column-Stores vs. Row-Stores: How Different Are They Really? Column-Stores vs. Row-Stores: How Different Are They Really? Daniel Abadi, Samuel Madden, Nabil Hachem Presented by Guozhang Wang November 18 th, 2008 Several slides are from Daniel Abadi and Michael Stonebraker

More information

Column Stores vs. Row Stores How Different Are They Really?

Column Stores vs. Row Stores How Different Are They Really? Column Stores vs. Row Stores How Different Are They Really? Daniel J. Abadi (Yale) Samuel R. Madden (MIT) Nabil Hachem (AvantGarde) Presented By : Kanika Nagpal OUTLINE Introduction Motivation Background

More information

Data warehousing in telecom Industry

Data warehousing in telecom Industry Data warehousing in telecom Industry Dr. Sanjay Srivastava, Kaushal Srivastava, Avinash Pandey, Akhil Sharma Abstract: Data Warehouse is termed as the storage for the large heterogeneous data collected

More information

Performance evaluation and benchmarking of DBMSs. INF5100 Autumn 2009 Jarle Søberg

Performance evaluation and benchmarking of DBMSs. INF5100 Autumn 2009 Jarle Søberg Performance evaluation and benchmarking of DBMSs INF5100 Autumn 2009 Jarle Søberg Overview What is performance evaluation and benchmarking? Theory Examples Domain-specific benchmarks and benchmarking DBMSs

More information

Data Warehousing and OLAP Technologies for Decision-Making Process

Data Warehousing and OLAP Technologies for Decision-Making Process Data Warehousing and OLAP Technologies for Decision-Making Process Hiren H Darji Asst. Prof in Anand Institute of Information Science,Anand Abstract Data warehousing and on-line analytical processing (OLAP)

More information

Data Warehousing and Decision Support

Data Warehousing and Decision Support Data Warehousing and Decision Support Chapter 23, Part A Database Management Systems, 2 nd Edition. R. Ramakrishnan and J. Gehrke 1 Introduction Increasingly, organizations are analyzing current and historical

More information

Impact of Column-oriented Databases on Data Mining Algorithms

Impact of Column-oriented Databases on Data Mining Algorithms Impact of Column-oriented Databases on Data Mining Algorithms Prof. R. G. Mehta 1, Dr. N.J. Mistry, Dr. M. Raghuvanshi 3 Associate Professor, Computer Engineering Department, SV National Institute of Technology,

More information

1. Analytical queries on the dimensionally modeled database can be significantly simpler to create than on the equivalent nondimensional database.

1. Analytical queries on the dimensionally modeled database can be significantly simpler to create than on the equivalent nondimensional database. 1. Creating a data warehouse involves using the functionalities of database management software to implement the data warehouse model as a collection of physically created and mutually connected database

More information

The Effects of Virtualization on Main Memory Systems

The Effects of Virtualization on Main Memory Systems The Effects of Virtualization on Main Memory Systems Martin Grund, Jan Schaffner, Jens Krueger, Jan Brunnert, Alexander Zeier Hasso-Plattner-Institute at the University of Potsdam August-Bebel-Str. 88

More information

: How does DSS data differ from operational data?

: How does DSS data differ from operational data? by Daniel J Power Editor, DSSResources.com Decision support data used for analytics and data-driven DSS is related to past actions and intentions. The data is a historical record and the scale of data

More information

Strategic Briefing Paper Big Data

Strategic Briefing Paper Big Data Strategic Briefing Paper Big Data The promise of Big Data is improved competitiveness, reduced cost and minimized risk by taking better decisions. This requires affordable solution architectures which

More information

Data Warehousing and Decision Support. Introduction. Three Complementary Trends. [R&G] Chapter 23, Part A

Data Warehousing and Decision Support. Introduction. Three Complementary Trends. [R&G] Chapter 23, Part A Data Warehousing and Decision Support [R&G] Chapter 23, Part A CS 432 1 Introduction Increasingly, organizations are analyzing current and historical data to identify useful patterns and support business

More information

Data Warehouse and Data Mining

Data Warehouse and Data Mining Data Warehouse and Data Mining Lecture No. 02 Introduction to Data Warehouse Naeem Ahmed Email: naeemmahoto@gmail.com Department of Software Engineering Mehran Univeristy of Engineering and Technology

More information

On Object Orientation as a Paradigm for General Purpose. Distributed Operating Systems

On Object Orientation as a Paradigm for General Purpose. Distributed Operating Systems On Object Orientation as a Paradigm for General Purpose Distributed Operating Systems Vinny Cahill, Sean Baker, Brendan Tangney, Chris Horn and Neville Harris Distributed Systems Group, Dept. of Computer

More information

Syllabus. Syllabus. Motivation Decision Support. Syllabus

Syllabus. Syllabus. Motivation Decision Support. Syllabus Presentation: Sophia Discussion: Tianyu Metadata Requirements and Conclusion 3 4 Decision Support Decision Making: Everyday, Everywhere Decision Support System: a class of computerized information systems

More information

4th National Conference on Electrical, Electronics and Computer Engineering (NCEECE 2015)

4th National Conference on Electrical, Electronics and Computer Engineering (NCEECE 2015) 4th National Conference on Electrical, Electronics and Computer Engineering (NCEECE 2015) Benchmark Testing for Transwarp Inceptor A big data analysis system based on in-memory computing Mingang Chen1,2,a,

More information

Guide Users along Information Pathways and Surf through the Data

Guide Users along Information Pathways and Surf through the Data Guide Users along Information Pathways and Surf through the Data Stephen Overton, Overton Technologies, LLC, Raleigh, NC ABSTRACT Business information can be consumed many ways using the SAS Enterprise

More information

Data Mining Concepts & Techniques

Data Mining Concepts & Techniques Data Mining Concepts & Techniques Lecture No. 01 Databases, Data warehouse Naeem Ahmed Email: naeemmahoto@gmail.com Department of Software Engineering Mehran Univeristy of Engineering and Technology Jamshoro

More information

Data Warehousing and OLAP Technology for Primary Industry

Data Warehousing and OLAP Technology for Primary Industry Data Warehousing and OLAP Technology for Primary Industry Taehan Kim 1), Sang Chan Park 2) 1) Department of Industrial Engineering, KAIST (taehan@kaist.ac.kr) 2) Department of Industrial Engineering, KAIST

More information

Low Overhead Concurrency Control for Partitioned Main Memory Databases

Low Overhead Concurrency Control for Partitioned Main Memory Databases Low Overhead Concurrency Control for Partitioned Main Memory Databases Evan Jones, Daniel Abadi, Samuel Madden, June 2010, SIGMOD CS 848 May, 2016 Michael Abebe Background Motivations Database partitioning

More information

DATA WAREHOUSING DEVELOPING OPTIMIZED ALGORITHMS TO ENHANCE THE USABILITY OF SCHEMA IN DATA MINING AND ALLIED DATA INTELLIGENCE MODELS

DATA WAREHOUSING DEVELOPING OPTIMIZED ALGORITHMS TO ENHANCE THE USABILITY OF SCHEMA IN DATA MINING AND ALLIED DATA INTELLIGENCE MODELS DATA WAREHOUSING DEVELOPING OPTIMIZED ALGORITHMS TO ENHANCE THE USABILITY OF SCHEMA IN DATA MINING AND ALLIED DATA INTELLIGENCE MODELS Harshit Yadav Student, Bal Bharati Public School, Dwarka, New Delhi

More information

HyPer-sonic Combined Transaction AND Query Processing

HyPer-sonic Combined Transaction AND Query Processing HyPer-sonic Combined Transaction AND Query Processing Thomas Neumann Technische Universität München October 26, 2011 Motivation - OLTP vs. OLAP OLTP and OLAP have very different requirements OLTP high

More information

Adapting Mixed Workloads to Meet SLOs in Autonomic DBMSs

Adapting Mixed Workloads to Meet SLOs in Autonomic DBMSs Adapting Mixed Workloads to Meet SLOs in Autonomic DBMSs Baoning Niu, Patrick Martin, Wendy Powley School of Computing, Queen s University Kingston, Ontario, Canada, K7L 3N6 {niu martin wendy}@cs.queensu.ca

More information

COLUMN-STORES VS. ROW-STORES: HOW DIFFERENT ARE THEY REALLY? DANIEL J. ABADI (YALE) SAMUEL R. MADDEN (MIT) NABIL HACHEM (AVANTGARDE)

COLUMN-STORES VS. ROW-STORES: HOW DIFFERENT ARE THEY REALLY? DANIEL J. ABADI (YALE) SAMUEL R. MADDEN (MIT) NABIL HACHEM (AVANTGARDE) COLUMN-STORES VS. ROW-STORES: HOW DIFFERENT ARE THEY REALLY? DANIEL J. ABADI (YALE) SAMUEL R. MADDEN (MIT) NABIL HACHEM (AVANTGARDE) PRESENTATION BY PRANAV GOEL Introduction On analytical workloads, Column

More information

The mixed workload CH-BenCHmark. Hybrid y OLTP&OLAP Database Systems Real-Time Business Intelligence Analytical information at your fingertips

The mixed workload CH-BenCHmark. Hybrid y OLTP&OLAP Database Systems Real-Time Business Intelligence Analytical information at your fingertips The mixed workload CH-BenCHmark Hybrid y OLTP&OLAP Database Systems Real-Time Business Intelligence Analytical information at your fingertips Richard Cole (ParAccel), Florian Funke (TU München), Leo Giakoumakis

More information

DATA MINING AND WAREHOUSING

DATA MINING AND WAREHOUSING DATA MINING AND WAREHOUSING Qno Question Answer 1 Define data warehouse? Data warehouse is a subject oriented, integrated, time-variant, and nonvolatile collection of data that supports management's decision-making

More information

collection of data that is used primarily in organizational decision making.

collection of data that is used primarily in organizational decision making. Data Warehousing A data warehouse is a special purpose database. Classic databases are generally used to model some enterprise. Most often they are used to support transactions, a process that is referred

More information

Essentials for Modern Data Analysis Systems

Essentials for Modern Data Analysis Systems Essentials for Modern Data Analysis Systems Mehrdad Jahangiri, Cyrus Shahabi University of Southern California Los Angeles, CA 90089-0781 {jahangir, shahabi}@usc.edu Abstract Earth scientists need to perform

More information

Data Warehousing and Decision Support

Data Warehousing and Decision Support Data Warehousing and Decision Support [R&G] Chapter 23, Part A CS 4320 1 Introduction Increasingly, organizations are analyzing current and historical data to identify useful patterns and support business

More information

DATA MINING TRANSACTION

DATA MINING TRANSACTION DATA MINING Data Mining is the process of extracting patterns from data. Data mining is seen as an increasingly important tool by modern business to transform data into an informational advantage. It is

More information

Product Documentation SAP Business ByDesign August Analytics

Product Documentation SAP Business ByDesign August Analytics Product Documentation PUBLIC Analytics Table Of Contents 1 Analytics.... 5 2 Business Background... 6 2.1 Overview of Analytics... 6 2.2 Overview of Reports in SAP Business ByDesign... 12 2.3 Reports

More information

Evolution of Database Systems

Evolution of Database Systems Evolution of Database Systems Krzysztof Dembczyński Intelligent Decision Support Systems Laboratory (IDSS) Poznań University of Technology, Poland Intelligent Decision Support Systems Master studies, second

More information

Analyzing Memory Access Patterns and Optimizing Through Spatial Memory Streaming. Ogün HEPER CmpE 511 Computer Architecture December 24th, 2009

Analyzing Memory Access Patterns and Optimizing Through Spatial Memory Streaming. Ogün HEPER CmpE 511 Computer Architecture December 24th, 2009 Analyzing Memory Access Patterns and Optimizing Through Spatial Memory Streaming Ogün HEPER CmpE 511 Computer Architecture December 24th, 2009 Agenda Introduction Memory Hierarchy Design CPU Speed vs.

More information

How Achaeans Would Construct Columns in Troy. Alekh Jindal, Felix Martin Schuhknecht, Jens Dittrich, Karen Khachatryan, Alexander Bunte

How Achaeans Would Construct Columns in Troy. Alekh Jindal, Felix Martin Schuhknecht, Jens Dittrich, Karen Khachatryan, Alexander Bunte How Achaeans Would Construct Columns in Troy Alekh Jindal, Felix Martin Schuhknecht, Jens Dittrich, Karen Khachatryan, Alexander Bunte Number of Visas Received 1 0,75 0,5 0,25 0 Alekh Jens Health Level

More information

DC Area Business Objects Crystal User Group (DCABOCUG) Data Warehouse Architectures for Business Intelligence Reporting.

DC Area Business Objects Crystal User Group (DCABOCUG) Data Warehouse Architectures for Business Intelligence Reporting. DC Area Business Objects Crystal User Group (DCABOCUG) Data Warehouse Architectures for Business Intelligence Reporting April 14, 2009 Whitemarsh Information Systems Corporation 2008 Althea Lane Bowie,

More information

Performance comparison of in-memory and disk-based databases using transaction processing performance council (TPC) benchmarking

Performance comparison of in-memory and disk-based databases using transaction processing performance council (TPC) benchmarking Vol. 8(1), pp. 1-8, August 2018 DOI 10.5897/JIIS2018.0106 Article Number: D74EDF358447 ISSN: 2141-6478 Copyright 2018 Author(s) retain the copyright of this article http://www.academicjournals.org/jiis

More information

Correctness Criteria Beyond Serializability

Correctness Criteria Beyond Serializability Correctness Criteria Beyond Serializability Mourad Ouzzani Cyber Center, Purdue University http://www.cs.purdue.edu/homes/mourad/ Brahim Medjahed Department of Computer & Information Science, The University

More information

Outline. Managing Information Resources. Concepts and Definitions. Introduction. Chapter 7

Outline. Managing Information Resources. Concepts and Definitions. Introduction. Chapter 7 Outline Managing Information Resources Chapter 7 Introduction Managing Data The Three-Level Database Model Four Data Models Getting Corporate Data into Shape Managing Information Four Types of Information

More information

COGNOS (R) 8 GUIDELINES FOR MODELING METADATA FRAMEWORK MANAGER. Cognos(R) 8 Business Intelligence Readme Guidelines for Modeling Metadata

COGNOS (R) 8 GUIDELINES FOR MODELING METADATA FRAMEWORK MANAGER. Cognos(R) 8 Business Intelligence Readme Guidelines for Modeling Metadata COGNOS (R) 8 FRAMEWORK MANAGER GUIDELINES FOR MODELING METADATA Cognos(R) 8 Business Intelligence Readme Guidelines for Modeling Metadata GUIDELINES FOR MODELING METADATA THE NEXT LEVEL OF PERFORMANCE

More information

Data Warehouse and Mining

Data Warehouse and Mining Data Warehouse and Mining 1. is a subject-oriented, integrated, time-variant, nonvolatile collection of data in support of management decisions. A. Data Mining. B. Data Warehousing. C. Web Mining. D. Text

More information

Summary: Issues / Open Questions:

Summary: Issues / Open Questions: Summary: The paper introduces Transitional Locking II (TL2), a Software Transactional Memory (STM) algorithm, which tries to overcomes most of the safety and performance issues of former STM implementations.

More information

Data Warehouse. Asst.Prof.Dr. Pattarachai Lalitrojwong

Data Warehouse. Asst.Prof.Dr. Pattarachai Lalitrojwong Data Warehouse Asst.Prof.Dr. Pattarachai Lalitrojwong Faculty of Information Technology King Mongkut s Institute of Technology Ladkrabang Bangkok 10520 pattarachai@it.kmitl.ac.th The Evolution of Data

More information

An Overview of Cost-based Optimization of Queries with Aggregates

An Overview of Cost-based Optimization of Queries with Aggregates An Overview of Cost-based Optimization of Queries with Aggregates Surajit Chaudhuri Hewlett-Packard Laboratories 1501 Page Mill Road Palo Alto, CA 94304 chaudhuri@hpl.hp.com Kyuseok Shim IBM Almaden Research

More information

Performance evaluation and. INF5100 Autumn 2007 Jarle Søberg

Performance evaluation and. INF5100 Autumn 2007 Jarle Søberg Performance evaluation and benchmarking of DBMSs INF5100 Autumn 2007 Jarle Søberg Overview What is performance evaluation and benchmarking? Theory Examples Domain-specific benchmarks and benchmarking DBMSs

More information

In-Memory Columnar Databases - Hyper (November 2012)

In-Memory Columnar Databases - Hyper (November 2012) 1 In-Memory Columnar Databases - Hyper (November 2012) Arto Kärki, University of Helsinki, Helsinki, Finland, arto.karki@tieto.com Abstract Relational database systems are today the most common database

More information

QUERY RECOMMENDATION SYSTEM USING USERS QUERYING BEHAVIOR

QUERY RECOMMENDATION SYSTEM USING USERS QUERYING BEHAVIOR International Journal of Emerging Technology and Innovative Engineering QUERY RECOMMENDATION SYSTEM USING USERS QUERYING BEHAVIOR V.Megha Dept of Computer science and Engineering College Of Engineering

More information

This tutorial has been prepared for computer science graduates to help them understand the basic-to-advanced concepts related to data mining.

This tutorial has been prepared for computer science graduates to help them understand the basic-to-advanced concepts related to data mining. About the Tutorial Data Mining is defined as the procedure of extracting information from huge sets of data. In other words, we can say that data mining is mining knowledge from data. The tutorial starts

More information

Data Mining and Warehousing

Data Mining and Warehousing Data Mining and Warehousing Sangeetha K V I st MCA Adhiyamaan College of Engineering, Hosur-635109. E-mail:veerasangee1989@gmail.com Rajeshwari P I st MCA Adhiyamaan College of Engineering, Hosur-635109.

More information

Was ist dran an einer spezialisierten Data Warehousing platform?

Was ist dran an einer spezialisierten Data Warehousing platform? Was ist dran an einer spezialisierten Data Warehousing platform? Hermann Bär Oracle USA Redwood Shores, CA Schlüsselworte Data warehousing, Exadata, specialized hardware proprietary hardware Introduction

More information

Decision Support, Data Warehousing, and OLAP

Decision Support, Data Warehousing, and OLAP Decision Support, Data Warehousing, and OLAP : Contents Terminology : OLAP vs. OLTP Data Warehousing Architecture Technologies References 1 Decision Support and OLAP Information technology to help knowledge

More information

Evaluation of Keyword Search System with Ranking

Evaluation of Keyword Search System with Ranking Evaluation of Keyword Search System with Ranking P.Saranya, Dr.S.Babu UG Scholar, Department of CSE, Final Year, IFET College of Engineering, Villupuram, Tamil nadu, India Associate Professor, Department

More information

The Data Organization

The Data Organization C V I T F E P A O TM The Data Organization 1251 Yosemite Way Hayward, CA 94545 (510) 303-8868 rschoenrank@computer.org Business Intelligence Process Architecture By Rainer Schoenrank Data Warehouse Consultant

More information

Histogram-Aware Sorting for Enhanced Word-Aligned Compress

Histogram-Aware Sorting for Enhanced Word-Aligned Compress Histogram-Aware Sorting for Enhanced Word-Aligned Compression in Bitmap Indexes 1- University of New Brunswick, Saint John 2- Université du Québec at Montréal (UQAM) October 23, 2008 Bitmap indexes SELECT

More information

35 Database benchmarking 25/10/17 12:11 AM. Database benchmarking

35 Database benchmarking 25/10/17 12:11 AM. Database benchmarking Database benchmarking 1 Database benchmark? What is it? A database benchmark is a sample database and a group of database applications able to run on several different database systems in order to measure

More information

V Conclusions. V.1 Related work

V Conclusions. V.1 Related work V Conclusions V.1 Related work Even though MapReduce appears to be constructed specifically for performing group-by aggregations, there are also many interesting research work being done on studying critical

More information

STRATEGIC INFORMATION SYSTEMS IV STV401T / B BTIP05 / BTIX05 - BTECH DEPARTMENT OF INFORMATICS. By: Dr. Tendani J. Lavhengwa

STRATEGIC INFORMATION SYSTEMS IV STV401T / B BTIP05 / BTIX05 - BTECH DEPARTMENT OF INFORMATICS. By: Dr. Tendani J. Lavhengwa STRATEGIC INFORMATION SYSTEMS IV STV401T / B BTIP05 / BTIX05 - BTECH DEPARTMENT OF INFORMATICS LECTURE: 05 (A) DATA WAREHOUSING (DW) By: Dr. Tendani J. Lavhengwa lavhengwatj@tut.ac.za 1 My personal quote:

More information

Application software office packets, databases and data warehouses.

Application software office packets, databases and data warehouses. Introduction to Computer Systems (9) Application software office packets, databases and data warehouses. Piotr Mielecki Ph. D. http://www.wssk.wroc.pl/~mielecki piotr.mielecki@pwr.edu.pl pmielecki@gmail.com

More information

Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks

Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks X. Yuan, R. Melhem and R. Gupta Department of Computer Science University of Pittsburgh Pittsburgh, PA 156 fxyuan,

More information

Segregating Data Within Databases for Performance Prepared by Bill Hulsizer

Segregating Data Within Databases for Performance Prepared by Bill Hulsizer Segregating Data Within Databases for Performance Prepared by Bill Hulsizer When designing databases, segregating data within tables is usually important and sometimes very important. The higher the volume

More information

Analytics: Server Architect (Siebel 7.7)

Analytics: Server Architect (Siebel 7.7) Analytics: Server Architect (Siebel 7.7) Student Guide June 2005 Part # 10PO2-ASAS-07710 D44608GC10 Edition 1.0 D44917 Copyright 2005, 2006, Oracle. All rights reserved. Disclaimer This document contains

More information

Limiting the State Space Explosion as Taking Dynamic Issues into Account in Network Modelling and Analysis

Limiting the State Space Explosion as Taking Dynamic Issues into Account in Network Modelling and Analysis Limiting the State Space Explosion as Taking Dynamic Issues into Account in Network Modelling and Analysis Qitao Gan, Bjarne E. Helvik Centre for Quantifiable Quality of Service in Communication Systems,

More information

Managing test suites for services

Managing test suites for services Managing test suites for services Kathrin Kaschner Universität Rostock, Institut für Informatik, 18051 Rostock, Germany kathrin.kaschner@uni-rostock.de Abstract. When developing an existing service further,

More information

TPC-DI. The First Industry Benchmark for Data Integration

TPC-DI. The First Industry Benchmark for Data Integration The First Industry Benchmark for Data Integration Meikel Poess, Tilmann Rabl, Hans-Arno Jacobsen, Brian Caufield VLDB 2014, Hangzhou, China, September 4 Data Integration Data Integration (DI) covers a

More information

Hybrid Storage for Data Warehousing. Colin White, BI Research September 2011 Sponsored by Teradata and NetApp

Hybrid Storage for Data Warehousing. Colin White, BI Research September 2011 Sponsored by Teradata and NetApp Hybrid Storage for Data Warehousing Colin White, BI Research September 2011 Sponsored by Teradata and NetApp HYBRID STORAGE FOR DATA WAREHOUSING Ever since the advent of enterprise data warehousing some

More information

Best Practices. Deploying Optim Performance Manager in large scale environments. IBM Optim Performance Manager Extended Edition V4.1.0.

Best Practices. Deploying Optim Performance Manager in large scale environments. IBM Optim Performance Manager Extended Edition V4.1.0. IBM Optim Performance Manager Extended Edition V4.1.0.1 Best Practices Deploying Optim Performance Manager in large scale environments Ute Baumbach (bmb@de.ibm.com) Optim Performance Manager Development

More information

Research Article ISSN:

Research Article ISSN: Research Article [Srivastava,1(4): Jun., 2012] IJESRT INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY An Optimized algorithm to select the appropriate Schema in Data Warehouses Rahul

More information

Logical Design A logical design is conceptual and abstract. It is not necessary to deal with the physical implementation details at this stage.

Logical Design A logical design is conceptual and abstract. It is not necessary to deal with the physical implementation details at this stage. Logical Design A logical design is conceptual and abstract. It is not necessary to deal with the physical implementation details at this stage. You need to only define the types of information specified

More information

Data Warehouse Design Using Row and Column Data Distribution

Data Warehouse Design Using Row and Column Data Distribution Int'l Conf. Information and Knowledge Engineering IKE'15 55 Data Warehouse Design Using Row and Column Data Distribution Behrooz Seyed-Abbassi and Vivekanand Madesi School of Computing, University of North

More information

SOME TYPES AND USES OF DATA MODELS

SOME TYPES AND USES OF DATA MODELS 3 SOME TYPES AND USES OF DATA MODELS CHAPTER OUTLINE 3.1 Different Types of Data Models 23 3.1.1 Physical Data Model 24 3.1.2 Logical Data Model 24 3.1.3 Conceptual Data Model 25 3.1.4 Canonical Data Model

More information