Best Practices: Physical Database Design. IBM DB2 for Linux, UNIX, and Windows


IBM DB2 for Linux, UNIX, and Windows
Best Practices: Physical Database Design

Sam Lightstone, Program Director and Senior Technical Staff Member, Information Management Software
Christopher Tsounis, Executive IT Specialist, Information Management Technical Sales
Agatha Colangelo, DB2 Information Development
Steven Tsounis, IT Specialist, Information Management Technical Sales

Table of contents

- Executive summary
- Introduction to physical database design
  o Assumptions about the reader
- Goals of physical database design
- Datatype selection best practices
  o Example of virtual views that represent a lookup table for each column
- Table normalization and denormalization best practices
  o Normalization
  o Third normal form (3NF)
  o 1NF, 2NF, and 3NF of database design
  o Star schema and snowflake models
  o Denormalization
  o IBM Layered Data Architecture
- Index design best practices
  o Clustering indexes
- Data clustering and multidimensional clustering (MDC) best practices
  o Block indexes for MDC tables
  o Maintaining clustering automatically during INSERT operations
  o Benefits of using MDC
  o MDC storage scenario
  o MDC run time overhead and benefit considerations
  o Determining when to use MDC versus a clustering index
- Database partitioning (shared-nothing hash partitioning) best practices
  o Balanced Warehouse and Balanced Configuration Units (BCU)
- Table (range) partitioning best practices
- UNION ALL View (UAV) partitioning best practices
  o Migrating UAVs to table partitioning
- Database partitioning, table partitioning, and MDC in the same database design best practices
- Roll-in and roll-out of data with table partitioning and MDC best practices
  o Rolling-in large data volumes using table partitioning best practices
- Materialized query table (MQT) best practices
- Post-design tools for improving designs for existing databases
  o Explain facility best practices
  o DB2 Design Advisor best practices
  o MDC selection capability of the DB2 Design Advisor
- Best Practices
- Conclusion
- Further reading
- Contributors
- Notices
- Trademarks

Executive summary

Physical database design is the single most important factor affecting database performance. It covers all of the design features that relate to the physical structure of the database, such as datatype selection, table normalization and denormalization, indexes, materialized views, data clustering, multidimensional data clustering, table (range) partitioning, and database (hash) partitioning.

Good physical database design reduces hardware resource utilization (I/O, CPU, and network) and improves your administrative efficiency. This, in turn, can help you achieve the following potential benefits for your business:

- Increased performance of applications that use the database, resulting in better response times and higher end-user satisfaction
- Reduced IT administrative costs, giving you the ability to manage a wider scope of databases and respond more quickly to changes in application requirements
- Reduced IT hardware costs
- Improved backup and recovery elapsed time

Figure 1 shows an illustration of a physical database system. The three heavy dark-boxed vertical rectangles indicate three distinct database instances. All other square or rectangular boxes represent storage blocks on disk, and all symbols represent data values within the table (such as geography or month). In this example, a table has been hash-partitioned across three instances called P1, P2, and P3. The table has been range-partitioned by month, allowing data to be easily added and deleted by month; indirectly, this also helps queries that have predicates on month. Data within each table has been clustered using multidimensional clustering (MDC), which serves as a further clustering within each range partition. The rows within the table are also indexed using regular row-based (RID-based) indexes. A materialized query table (MQT) is created on the table, which includes aggregated data (such as average sales by geography) and itself has indexing and MDC.

Figure 1. Illustration of a physical database system

Introduction to physical database design

Database design is performed in three stages:

1. Logical database design: includes gathering of business requirements and entity-relationship modeling.
2. Conversion of the logical design into table definitions (often performed by an application developer): includes pre-deployment design, table definitions, normalization, PK and FK relationships, and basic indexing.
3. Post-deployment physical database design (often performed by a database administrator): includes improving performance, reducing I/O, and streamlining administration tasks.

Physical database design covers those aspects of database design that impact the actual structure of the database on disk, items 2 [1] and 3 in the list above. Although you can perform logical design independently of the platform that the database will eventually use, many physical database attributes depend on the specifics and semantics of the target DBMS. Physical database design includes the following attributes:

- Datatype selection
- Table normalization
- Table denormalization
- Indexing
- Clustering
- MDC
- Database partitioning
- Range partitioning
- UAV partitioning
- MQTs
- Memory allocation
- Database storage topology
- Database storage object allocation

This paper covers all of these except database storage topology and database storage object allocation, which are covered in the Best Practices: Database Storage white paper. That white paper and others mentioned throughout this paper are available at the DB2 Best Practices website.

[1] This phase is variably referred to in the industry as logical database design or physical database design. It is known as logical database design in the sense that it can be designed independently of the data server or the particular DBMS used, and it is often performed by the same people who perform the early requirements gathering and entity-relationship modeling. Conversely, it is also called physical database design in the sense that it affects the physical structure of the database and its implementation. For the sake of this document we use the latter interpretation, and therefore include it as part of physical database design.

Physical database design is as old as databases themselves [2]. The first relational databases were prototypes (in the early 1970s). As relational database systems advanced, new techniques were introduced to help improve operational efficiency. The most elementary problems of database design are table normalization and index selection, both of which are discussed below. Today, we can achieve I/O reductions by properly partitioning data, distributing data, and improving the indexing of data. All of these innovations (which improve database capabilities, expand the scope of physical database design, and increase the number of design choices) have resulted in the increased complexity of optimizing database structures. Although the 1980s and 1990s were dominated by the introduction of new physical database design capabilities, the years since have been dominated by efforts to simplify the process through automation and best practices.

The vast majority of physical database design features and attributes have the primary goal of reducing I/O at run time. To a lesser degree, there are also physical design aspects that help improve administrative efficiency and reduce CPU or network use. In addition, in a DB2 partitioned environment, the database design influences the degree of parallel processing, for example, parallel query processing. The best practices presented in this document have been developed with the reality of today's database systems in mind, and specifically address the features and facilities available in DB2.

Assumptions about the reader

It is assumed that you are familiar with the physical database design features described, so only a very brief description of each one is provided. The focus of this paper is on the best practices for applying these features. For details on each respective feature, refer to the DB2 product documentation.

[2] The relational model for databases was first proposed in 1970 by E. F. Codd at IBM. The first relational database systems to be implemented, using SQL and B+ trees, were IBM's System R, in 1976, and Ingres at the University of California, Berkeley. The B+ tree, the most commonly used indexing storage structure for user-designed indexes, was first described in the paper "Organization and Maintenance of Large Ordered Indices" by Rudolf Bayer and Edward M. McCreight, 1972.

Goals of physical database design

A high-quality physical database design is one that meets the following goals:

- Minimizes I/O
- Balances design features that optimize query performance concurrently with transaction performance and maintenance operations
- Improves the efficiency of database management, such as roll-in and roll-out of data
- Improves the performance of administration tasks, such as index creation or backup and recovery processing
- Minimizes backup and recovery elapsed time

Datatype selection best practices

When designing a physical database, the selection of appropriate datatypes is an important consideration that should not be overlooked. Often, abbreviated or intuitive codes are used to represent a longer value in columns, or to easily identify what the code represents; for example, an account status column whose codes are OPN, CLS, and INA (representing an account that can be open, closed, or inactive).

From a query processing perspective, numeric values can be processed more efficiently than character values, especially when joining values, so using a numeric datatype can provide a slight benefit. While using numeric datatypes might make the values stored in a column harder to interpret, there are appropriate places where the definitions of numeric values can be stored for retrieval by end users, such as:

o Storing the definitions as a domain value in a data modeling tool such as Rational Data Architect, where the values can be published to a larger team using metadata reporting
o Storing the definitions of the values in a table in a database, where the definitions can be joined to the values to provide context, such as a text name or description (tables that store values of columns and their descriptions are often referred to as reference tables or lookup tables)

Another concern that is often raised is that, for large databases, this storing of definitions could lead to a proliferation of reference tables. While this is true, if an organization chooses to use a reference table for each column that stores a code value, it is possible to consolidate these reference tables into a single table or a few tables. From these consolidated reference tables, virtual views can be created to represent the lookup table for each column.

Example of virtual views that represent a lookup table for each column

In this example, the TCUSTOMER table has two columns that use code values: CUST_TYPE and CUST_MKT_SEG. A reference table is created for each column that uses a code, resulting in two reference tables, TCUST_TYPE_REF and TCUST_MKT_SEG_REF.

This approach is not flexible, because any time a new column is added that uses a code value, a new reference table must be created. A possible solution is to consolidate the reference tables into a single reference table (TREF_MASTER).

From the TREF_MASTER table, two virtual views, VCUST_TYPE_REF and VCUST_MKT_SEG_REF, are created to represent the reference tables in the example above. The benefit of this approach is that end users can still use the reference data (without having to write complex SQL) by simply accessing the reference view for each column. In addition, the DBA only has to maintain a single table for all of the reference data, and the proliferation of reference tables is limited.
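As a minimal sketch of what the consolidated TREF_MASTER table might look like, here is hypothetical DDL for illustration only: the column names are inferred from the view SQL shown below, the datatypes are assumptions, and "TABLE" and "COLUMN" are written as delimited identifiers because they are reserved words.

CREATE TABLE REFTB.TREF_MASTER (
    TBL_SCHEMA  VARCHAR(128) NOT NULL,   -- together with "TABLE" and "COLUMN",
    "TABLE"     VARCHAR(128) NOT NULL,   -- identifies which code set the row
    "COLUMN"    VARCHAR(128) NOT NULL,   -- belongs to
    VALUE       SMALLINT     NOT NULL,   -- the code value itself
    VALUE_NME   VARCHAR(30)  NOT NULL,   -- short name shown to end users
    VALUE_DESC  VARCHAR(254),            -- longer description
    PRIMARY KEY (TBL_SCHEMA, "TABLE", "COLUMN", VALUE)
);

One row is stored per code value per column; the per-column views below simply filter this table.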

To understand how the VCUST_TYPE_REF view was created, here is the SQL:

CREATE VIEW VCUST_TYPE_REF AS
SELECT VALUE AS CUST_TYPE,
       VALUE_NME AS CUST_TYPE_NME,
       VALUE_DESC AS CUST_TYPE_DESC
FROM REFTB.TREF_MASTER
WHERE TBL_SCHEMA = 'REFTB'
  AND "TABLE" = 'REF_MASTER'
  AND "COLUMN" = 'CUST_TYPE'

Use the following best practices when selecting datatypes:

- Always try to use a numeric datatype over a character datatype, taking the following considerations into account:
  o When creating a column that will hold a Boolean value ("YES" or "NO"), use a DECIMAL(1,0) or similar datatype, and use 0 and 1 as values for the column rather than "N" or "Y".
  o Use integers to represent codes. If fewer than 10 code values will be stored in a given column, DECIMAL(1,0) is appropriate; if more than 9 code values will be stored, use SMALLINT.
- Store the definitions as a domain value in a data modeling tool, such as Rational Data Architect, where the values can be published to a larger team using metadata reporting.
- Store the definitions of the values in a table in a database, where the definitions can be joined to the values to provide context, such as a text name or description.
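As a hypothetical illustration of this advice (the table and column names are ours, not the paper's), the account example from the start of this section could be declared with numeric codes instead of character codes:

CREATE TABLE TACCOUNT (
    ACCT_ID     INTEGER      NOT NULL PRIMARY KEY,
    ACCT_ACTIVE DECIMAL(1,0) NOT NULL,  -- Boolean: 0 = no, 1 = yes
    ACCT_STATUS DECIMAL(1,0) NOT NULL   -- code: 1 = open, 2 = closed, 3 = inactive
);

The ACCT_STATUS codes would then be described by one row per value in a consolidated reference table such as TREF_MASTER above.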

Table normalization and denormalization best practices

Table normalization is the restructuring of a data model by reducing its relations to their simplest forms. It is a key step in the task of building a logical relational database design. Normalization helps avoid redundancies and inconsistencies in data; it is typically a logical data modeling exercise whose outcome might be implemented in the physical design. There are a few goals for deploying a normalized design:

- Eliminate redundant data, for example, storing the same data in more than one table.
- Enforce valid data dependencies by storing only related data in a table, and dividing relational data into multiple related tables.
- Maximize the flexibility of the system for future growth in data structures.

Normalization

The dominant strategies for normalization are:

- Third normal form (3NF), which is used in online transaction processing (OLTP) and many general-purpose databases, including enterprise data warehouses (also called atomic warehouses).
- Star schema and snowflake, which are dimensional model forms of normalization, and are used heavily in data warehousing and OLAP. Specify non-enforced RI on FK columns to reduce table access for STAR JOINs without incurring the overhead of RI.

Third normal form (3NF)

3NF is a combination of the rules from first normal form and second normal form. The following rules are specific to 3NF:

- Eliminate repeating groups. Make a separate table for each set of related attributes, and give each table a PK.
- Eliminate duplicate columns and redundant data in each table. Move subsets of columnar data that apply to multiple rows of a table into separate tables. Create relationships between the tables by using FKs.

- Eliminate columns not dependent on keys. If attributes do not contribute to a description of a key, move them into a separate table. Remove columns not dependent upon the PK.

1NF, 2NF, and 3NF of database design

The following examples demonstrate the first, second, and third normal forms of database design:

Denormalized model:

First normal form (1NF): To make the denormalized model comply with 1NF, the repeating group of data elements, the customer address lines, and the customer names were normalized into separate tables.

Second normal form (2NF): For the model to comply with 2NF, it must comply with 1NF, and every non-key attribute must be fully dependent on the entire key, not on just part of a composite key.

Third normal form (3NF): For the model to comply with 3NF, any transitive dependencies must be eliminated. Transitive dependencies occur when a value in a non-key field is determined by the value of another non-key field that is not part of a candidate key.
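As a concrete sketch of the 3NF outcome for the customer example, here is hypothetical DDL; the names and datatypes are illustrative assumptions, not taken from the original diagrams:

-- Customer names in their own table, keyed by a customer identifier.
CREATE TABLE TCUSTOMER (
    CUST_ID   INTEGER     NOT NULL PRIMARY KEY,
    CUST_NAME VARCHAR(60) NOT NULL
);

-- The repeating address lines (removed for 1NF) become rows in a child table;
-- every attribute depends on the whole key (2NF) and there are no transitive
-- dependencies between non-key columns (3NF).
CREATE TABLE TCUST_ADDRESS (
    CUST_ID   INTEGER     NOT NULL,
    ADDR_SEQ  SMALLINT    NOT NULL,
    ADDR_LINE VARCHAR(80) NOT NULL,
    PRIMARY KEY (CUST_ID, ADDR_SEQ),
    FOREIGN KEY (CUST_ID) REFERENCES TCUSTOMER (CUST_ID)
);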

Star schema and snowflake models

The star schema and snowflake models have become quite popular for data warehousing and BI systems. The basis of the star schema is the separation of the facts of a system from its dimensions. Dimensions are attributes of the data, such as the location, customer name, or part description, and the facts are the time-specific events related to the data. For example, a part description does not typically change over time, so it can be designed as a dimension. Conversely, the number of parts sold daily varies over time and is therefore a fact.

A star schema is so named because it is typically characterized by a large central fact table, holding information about events that vary over time, surrounded (conceptually) by a set of dimension tables holding the meta attributes of the items referenced within the fact events. A snowflake is basically an extension of a star schema: in a snowflake design, low-cardinality attributes are moved from a dimension table in the star schema into another dimension table, and a relationship is created between the two dimension tables.

Denormalization

In contrast to normalization, denormalization is the process of collapsing tables, thereby possibly increasing the redundancy of data within a database. Denormalization can be useful in reducing the complexity or number of joins, and in reducing the complexity of a database by reducing the number of tables. The primary goal of denormalization is to maximize the performance of a system and reduce the complexity of administering it.

IBM Layered Data Architecture

The IBM Layered Data Architecture offers multiple levels of granularity. Each layer provides a different level of detail and data summarization appropriate to the needs of its users (analysts and executives). As data ages, it rolls up through the layers (with more tables and less data per table). This architecture is designed specifically for mixed workloads, query performance, rapid incorporation of new data sources, and deployment of new applications. The layered architecture enables concurrent loading, query, archive, and maintenance without compromising query performance, and the multiple levels of data granularity are available for multiple types of analytics. Figure 2 shows the 5 layers (or floors) of the IBM Layered Data Architecture.

Figure 2. IBM Layered Data Architecture

With this model, warehouse administrators can:

1. Use visual modeling tools to optimize the design of multilayered warehouse schemas.
2. Use their preferred extract, transform, and load (ETL) software to bulk-load the staging layer of the warehouse with scale, speed, and rich transformations from myriad enterprise data sources.
3. Use SQL Warehousing Tools (SQWs) to maintain analytic structures in the performance and business access layers, or to replace hand-coded SQL flows anywhere inside the warehouse.

This layered architecture is a powerful paradigm that is too detailed to describe at length here. Refer to Best Practices for Creating Scalable High Quality Data Warehouses with DB2 in the Further reading section for detailed information on this layered architecture.

Use the following normalization and denormalization best practices:

- Use 3NF whenever possible for most OLTP and general-purpose database designs to maintain flexibility in the design of the system. It is a tried-and-true normalization model.
- For data warehouses and data marts that require very high performance, a star schema or snowflake model is typically optimal for dimensional query processing. However, verify that the star schema or snowflake model conforms to the relationships that you designed in the normalized logical data model. More information about logical modeling for users of Rational Data Architect is available in the Best Practices: Data Life Cycle Management white paper. A minimal star schema sketch follows this list.
- For broad-based data warehousing that is used for several purposes, such as operational data stores, reporting, OLAP, and cubing, use the IBM Layered Data Architecture illustrated in Figure 2.
- Consider denormalizing very narrow tables, ones with a row length of 30 or fewer bytes. Extra tables in a database increase query complexity and complicate administration.
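To make the fact/dimension separation concrete, here is a minimal hypothetical star schema; all table and column names are illustrative assumptions (PRODKEY and STOREKEY echo the fact-table indexing advice later in this paper):

-- Dimension tables: slowly changing attributes of the data.
CREATE TABLE DPRODUCT (
    PRODKEY   INTEGER NOT NULL PRIMARY KEY,
    PROD_DESC VARCHAR(30)
);

CREATE TABLE DSTORE (
    STOREKEY INTEGER  NOT NULL PRIMARY KEY,
    REGION   CHAR(12)
);

-- Central fact table: time-specific events, with an FK to each dimension.
CREATE TABLE FSALES (
    SALE_DATE DATE    NOT NULL,
    PRODKEY   INTEGER NOT NULL REFERENCES DPRODUCT (PRODKEY),
    STOREKEY  INTEGER NOT NULL REFERENCES DSTORE (STOREKEY),
    QTY_SOLD  INTEGER NOT NULL
);

A snowflake would further split a dimension, for example moving REGION out of DSTORE into its own table referenced by DSTORE.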

Index design best practices

Indexes are critical for performance. They are used by a database for the following purposes:

- To apply predicates, providing rapid lookup of the location of data in a database and reducing the number of rows navigated
- To avoid sorts for ORDER BY and GROUP BY clauses
- To induce order for joins
- To provide index-only access, which avoids the cost of accessing data pages
- As the only way to enforce uniqueness in a relational database

However, indexes consume additional hardware resources:

- They add extra CPU and I/O cost to UPDATE, INSERT, DELETE, and LOAD operations
- They add to prepare time because they provide more choices for the optimizer
- They can use a significant amount of disk storage

In DB2 database systems, a B+ tree structure is used as the underlying implementation for indexes. All data is stored in the leaf nodes, and the keys are optionally chained in a bidirectional manner to allow both forward and backward index scanning. If DISALLOW REVERSE SCANS is specified, the index cannot be scanned in reverse order.

Clustering indexes

Clustering indexes (also called special indexes) indicate to the database manager that data in the table object should be clustered in a specific order, on disk, according to the definition of the index. For example, if the clustering index is defined on a date key, the DB2 database manager attempts to store rows with similar dates next to each other in the table object, in ascending date sequence. The table in Figure 3 has two row-based indexes defined on it:

- A clustering index on Region
- Another index on Year

Figure 3. A regular table with a clustering index

The value of this clustering is that subsequent queries with predicates on the clustering attribute need to perform dramatically less I/O. For example, a query on sales by date performs far less I/O if the rows for the selected dates are stored next to each other on disk. However, clustering indexes are merely an indicator to the database: as new rows are inserted, the DB2 kernel attempts to place them near rows with the same or similar attributes, but if space is unavailable, the incoming or changed row might be redirected to another location that is unclustered (that is, not near the related rows).

When an INSERT occurs (or an UPDATE to the clustering keys), the DB2 kernel navigates top down, scanning the clustering index to determine an appropriate location for the row. Therefore, INSERT, and some UPDATE operations, on a table with a clustering index incur the overhead of index access that an unclustered table would not. Techniques like append mode (the APPEND ON option on the ALTER TABLE statement) can minimize this overhead by placing all new rows at the end of the table.

Clustering indexes therefore provide approximate clustering, and data often becomes unclustered over time. The REORG utility can be used to reorganize the data rows back into perfect cluster order, although for online REORGs this can be a time-consuming and log-intensive operation.

To create a clustering index, simply add the CLUSTER keyword on the CREATE INDEX statement, as shown in the following example, where a clustering index MyIndex is created on column C1 of table T1. There can be only one clustering index per table.

CREATE INDEX MyIndex ON T1 (C1) CLUSTER

Because data clustering can deteriorate over time when using a clustering index, clustering with MDC is preferred as a best practice: it guarantees clustering at all times, and provides the option of clustering along multiple dimensions concurrently. See the discussion on MDC for help on determining which method to use.

Use the following index design best practices:

- Index every PK and most FKs in a database. Most joins occur between PKs and FKs, so it is important to build indexes on all PKs and FKs whenever possible. Indexes on FKs also improve the performance of RI checking.
- Explicitly provide an index for the PK. The DB2 database manager indexes the PK automatically with a system-generated name if one is not specified, and the system-generated name of an automatically generated index is difficult to administer.
- Columns frequently referenced in WHERE clauses are good candidates for an index. An exception to this rule is when the predicate provides minimal filtering, such as an inequality like WHERE cost <> 4. Indexes are seldom useful for inequalities because of the limited filtering provided.
- Specify indexes on columns used for equality and range queries.
- Create an index for each set of fact-table columns that join to a dimension. These columns do not have to be part of an explicit FK. Creating the index allows STAR JOIN access plans that use dynamic bitmap index ANDing.
- Consider creating indexes on combinations of fact-table columns. For example, if PRODKEY and STOREKEY join to the product and store dimensions respectively, consider creating an index on (PRODKEY, STOREKEY). This facilitates a hub or cartesian STAR JOIN access plan.
- Use the db2pd command, which indicates the number of times that indexes were used, in order from highest to lowest. This can be helpful in detecting which indexes are commonly used. For example:

  db2pd -db MY_DATABASE -tcbstats index

  The indexes are referenced by IID, which can be linked with the IID in SYSIBM.SYSINDEXES (a catalog query sketch follows at the end of this section). At the end of the output is a list of index statistics.

  Scans indicates read access on each index, while the other indicators in the output provide insight into write and update activity on the index.

- Use the DB2 Design Advisor to indicate which indexes are never accessed for a specified workload and can therefore be dropped.
- Add indexes only when absolutely necessary. Remember that indexes significantly impact INSERT, UPDATE, and DELETE performance, and they also require storage.
- To reduce the need for frequent reorganization when using a clustering index, specify an appropriate PCTFREE at index creation time to leave a percentage of free space on each index leaf page as it is created. During future activity, rows can be inserted into the index with less likelihood of causing index page splits. Page splits cause index pages not to be contiguous or sequential, which in turn results in decreased efficiency of index page prefetching. Note: The PCTFREE specified when you create the relational index is retained when the index is reorganized. Dropping and recreating, or reorganizing, the relational index also creates a new set of pages that are roughly contiguous and sequential, which improves index page prefetch. Although more costly in time and resources, the REORG TABLE utility also ensures clustering of the data pages. Clustering has greater benefit for index scans that access a significant number of data pages.
- Examine queries with range predicates or ORDER BY clauses to identify clustering dimensions.
- Clustering indexes incur additional overhead for INSERT and some UPDATE operations. If your workload performs a large number of updates, weigh the benefits of clustering for queries against the additional cost to INSERTs and UPDATEs. In many cases, the benefit far outweighs the cost, but not always.

- Avoid or remove redundant indexes. An example of a redundant index is one that contains only an account number column when there is another index that contains the same account number column as its first column. Indexes that use the same or similar columns make query optimization more complicated, use storage, seriously impact INSERT, UPDATE, and DELETE performance, and often have very marginal benefits.
- Although the DB2 database system provides dynamic bitmap indexing, index ANDing, and index ORing, it is good practice to specify composite indexes (referred to as multiple-column indexes) on columns that are frequently specified together in WHERE clauses.
- Choose the leading columns of a composite index to facilitate matching index scans. The leading columns should reflect columns frequently used in WHERE clauses. The DB2 database system navigates only top down through a B-tree index for the leading columns used in a WHERE clause, referred to as a matching index scan. If the leading column of an index is not in a WHERE clause, the optimizer might still use the index, but it is forced to use a nonmatching index scan across the entire index.
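The db2pd report above identifies indexes only by IID. Here is a sketch of mapping IIDs back to index names through the catalog; the schema and table names are placeholders, and it assumes the IID column exposed through SYSCAT.INDEXES (the catalog view over SYSIBM.SYSINDEXES):

SELECT IID, INDSCHEMA, INDNAME
FROM SYSCAT.INDEXES
WHERE TABSCHEMA = 'MYSCHEMA'
  AND TABNAME = 'MY_TABLE'
ORDER BY IID;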

Data clustering and multidimensional clustering (MDC) best practices

MDC is a technique for clustering data along more than one dimension at the same time. However, you can also use MDC for single-dimensional clustering, just as you can use a clustering index. An advantage of an MDC table is that it is designed to always be clustered: a reorganization is never required to re-establish a high cluster ratio. To understand MDC, you must first understand some basic terminology:

- Cells are the portions of the table containing data that has a unique set of dimension values: the intersection formed by taking a slice from each dimension.
- Blocks are the units of storage, equal to an extent size (one or more pages), that are used to store a cell. Your extent-size specification determines the size of the block.

Block indexes for MDC tables

Unlike traditional indexes created by the CREATE INDEX syntax, which index each row in a table, MDC indexes the rows in the table by block, using what are called block indexes. MDC block indexes are typically 1/1000th of the size of row-based indexes, and provide not only huge savings in storage for the index, but massive efficiencies in all block index operations (such as index scan, index ANDing, and index ORing). INSERT and UPDATE operations are also enhanced because the block index is only updated if a new cell is created. As shown in Figure 4, block indexes provide a significant reduction in disk usage and significantly faster data access.

Figure 4. How row indexes differ from block indexes

The MDC table shown in Figure 5 is physically organized such that rows having the same Region and Year values are grouped together into separate blocks, or extents. MDC block indexes are created for each dimension as well as for the composite of the dimensions. For example, if the dimensions for a table are (Region, Year), then a block index is built for Region, for Year, and for the composite (Region, Year).

Figure 5. A multidimensional clustering (MDC) table

An MDC table defined with even just a single dimension can benefit from these MDC attributes, and can be a viable alternative to a regular table with a clustering index. This decision should be based on many factors, including the queries that make up the workload, and the nature and distribution of the data in the table. A high-cardinality column is not a good choice for a single-dimension MDC, because you will get a cell for each unique value.

Maintaining clustering automatically during INSERT operations

Automatic maintenance of data clustering in MDC tables is ensured using composite block indexes [3]. These indexes are used to dynamically manage and maintain the physical clustering of data along the dimensions of the table over the course of INSERT operations. When an insert occurs, the composite block index is probed for the logical cell corresponding to the dimension values of the row to be inserted. The block index is not updated unless a new cell is created.

[3] A composite block index is automatically created and contains all columns across all dimensions. It is used to maintain the clustering of data over insert and update activity, and might also be selected by the optimizer to efficiently access data that satisfies values from a subset, or from all, of the column dimensions.

As shown in Figure 6, if the key of the logical cell is found in the index, its list of block IDs (BIDs) gives the complete list of blocks in the table having the dimension values of the logical cell. This limits the number of extents of the table to search for space to insert the row.

Figure 6. Composite block index on (YearAndMonth, Region)

Because clustering is automatically maintained, reorganization of an MDC table is never needed to re-cluster data. Also, MDC can reuse empty cells that result from the mass deletion of rows without a REORG. However, reorganization can still be used in rare situations to reclaim space. For example, if cells have many sparse blocks where data could fit on fewer blocks, or if the table has many pointer-overflow pairs, a reorganization of the table would compact rows belonging to each logical cell into the minimum number of blocks needed, as well as remove pointer-overflow pairs.

Benefits of using MDC

The value of MDC is profound. It improves complex query performance by 10 times in some cases, and you can use it for roll-in and roll-out of data. Other benefits include the following:

- MDCs are multidimensional. For example, data can be perfectly clustered along DATE and LOCATION dimensions; cells and ranges are created automatically as new data arrives.
- MDCs can be used in conjunction with normal RID-based indexes, range partitioning, and MQTs. Index ANDing or ORing of block-based and RID-based indexes is a possible access path that can be chosen by the DB2 optimizer.
- MDCs work with intra-query parallelism, DPF (shared-nothing) parallelism, and LOAD, BACKUP, and REORG operations.

- MDC dimensions, unlike range-partitioned tables, are dynamic: new cells are created within the table automatically as unique new data representing new cells arrives in the table, either through SQL operations (including JDBC, CLI, and so forth) or through utility operations such as LOAD and IMPORT. Empty cells can also be reused during these operations.
- MDCs maintain clustering and, as such, do not need REORGs to maintain cluster ratios.

The following example shows how to define an MDC table:

CREATE TABLE T1 (
    c1 DATE,
    c2 INT,
    c3 INT,
    c4 DOUBLE,
    c5 INT GENERATED ALWAYS AS (INT(c1)/100)
)
ORGANIZE BY DIMENSIONS (c5, c3)

The ORGANIZE BY clause defines the clustering dimensions; the table is clustered by C5 and C3 at the same time. C1 is coarsified [4] to C5, which contains fewer distinct values (days are reduced to months). NOTE: The coarsified generated columns are used in the MDC block indexes to perform cell-level elimination of data. Generated columns are fully supported by MDC and the DB2 optimizer.

The key design challenge of MDC is the careful selection of the clustering dimensions. If you choose clustering dimensions that result in too many cells, storage costs can increase substantially. The reason for this is important to understand. In an MDC table, every cell is allocated as many storage blocks on disk as required, and storage blocks are by design equal to the extent size of the table space that holds the table. The number of storage blocks is 0 if a cell has no data. However, in a typical table a cell stores several rows, resulting in one or more storage blocks being allocated to the cell. For every cell that has data, there is a chain of blocks, which typically ends in a partially filled block. Therefore, there can be wasted storage for each cell (not each block), proportional to the size of the storage block. New blocks are created only when the previous block is full (or nearly full). If rows are deleted and the cell becomes empty, the database manager can reuse the space, avoiding the need for a reorganization (for space reclamation).

If the number of cells in the table is very large, the wasted storage is large. If the MDC design is poor and results in a huge number of cells, the table storage requirement expands dramatically, and MDC can also be a performance detriment.

[4] The term coarsification refers to a mathematical expression that reduces the cardinality (the number of distinct values) of a clustering dimension. A common example is coarsifying a date to the week of the date, the month of the date, or the quarter of the year.

However, when designed well, MDC tables are only slightly larger than non-MDC tables, and offer profound benefits for clustering and for roll-in and roll-out of data (as discussed in the paragraphs that follow). The key is to use low-cardinality columns for the dimensions of an MDC.

Figure 7 shows storage-block and cell allocation. Each cell contains a set of storage blocks. Most of the blocks are filled with data, but for each cell there is a block at the end of the chain that is partially filled to a lesser or greater degree.

Figure 7. MDC storage by cell

If you have sample or actual data, you can use SQL to measure the number of expected MDC cells for any given potential MDC design, as follows:

SELECT COUNT(*)
FROM (SELECT DISTINCT COL1, COL2, COL3
      FROM MY_FAV_TABLE) AS NUM_DISTINCT;

COL1, COL2, and COL3 represent the MDC dimensions for a 3-dimensional MDC table. The resulting number multiplied by the extent size of the table gives you an upper bound on the extent growth (not size) of the table when converted to MDC.
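Building on the cell-count estimate, a companion query over the same hypothetical columns shows how many rows would land in each candidate cell, which helps judge whether blocks will be densely filled:

SELECT COL1, COL2, COL3, COUNT(*) AS ROWS_PER_CELL
FROM MY_FAV_TABLE
GROUP BY COL1, COL2, COL3
ORDER BY ROWS_PER_CELL ASC
FETCH FIRST 20 ROWS ONLY;

Cells whose row counts are far below the number of rows that fit in one extent will leave their blocks mostly empty; that is a signal to coarsify one of the dimensions.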

As described in the previous section, another key value of MDC is that the DB2 database manager automatically creates indexes for MDC tables over the MDC dimensions of the table. These special indexes (called block indexes) index data by block instead of by row, which results in run-time performance benefits for queries and minimal overhead for INSERT, UPDATE, and DELETE operations.

MDC also provides features that facilitate the roll-in and roll-out of data:

o MDC causes much less block-index I/O during the roll-in process, because the block index is only updated once, when the block is full (not for every row inserted).
o Inserts are also faster, because MDC reuses existing empty blocks without the need for index page splitting.
o Locking is reduced for inserts, because locks are taken at the block level rather than at the row level.
o There is no need to REORG data after roll-in and roll-out.

MDC storage scenario

You want to create an MDC table for a transaction fact on Date, Product Name, and Region. Here are some variables to consider for the MDC creation:

- There are 365 days in a year
- There are 100,000 products for company XYZ
- There are 10 regions for company XYZ

Initial MDC creation

If the MDC table were created strictly on the Date, Product, and Region columns, there would be 1,000,000 new cells created daily (1 x 100,000 x 10) and 365 million cells per year (the previous number x 365). In regions where transactions are low, there would be many sparse pages, and even empty pages. This could lead to a lot of unnecessary space being consumed by allocating so many cells to contain this data. This is not good.

Improving the creation of the MDC

Use functions to coarsify and limit MDC cardinality. For example:

- If you use the month function on the Date, you have 12 results per year
- If you substring the Product name to pick its first character, you have 26 potential results
- Leave Region as is, with 10 results

Using the recommendations in this scenario, every year the MDC table would have 12 x 26 x 10 = 3,120 cells, or about 8-9 new cells per day. This would eliminate the sparseness of data on many of the pages, and provide a reasonable cardinality for the MDC to be effective in providing a performance benefit.

MDC run time overhead and benefit considerations

MDC is designed to provide large performance benefits for queries and improvements for many DELETE scenarios. Even so, MDC tables do incur overhead relative to non-clustered tables, while offering significant performance benefits over tables that are clustered using a clustering index.

Consider first the overhead of MDC versus an unclustered table:

- An INSERT on a non-clustered table accesses each index to add a reference to the inserted row. In contrast, an INSERT on an MDC table requires an initial read of the MDC composite block index to determine to which cell and block the row belongs, followed (after the insert on the table) by access to each index to insert a reference to the row. (Clustering indexes incur a similar overhead.)
- If the MDC table includes a generated column to coarsify one of the dimensions, every INSERT incurs a small processing overhead to compute the generated value for that column, because all generated columns in DB2 are fully materialized, that is, calculated and stored within the row.

However, when compared to a table clustered with a clustering index, MDC offers significant performance advantages:

- Index maintenance is dramatically reduced during INSERTs compared to the processing required for a clustering index, because the DB2 database manager only updates the block index when the first key is added to a block, unlike a RID index, where every single row inserted into the table requires an update to all indexes. That is, if there are 1000 rows per block, the rate of index updates is 1/1000th of what it would be for a RID index.
- The index update is cheaper, because the index is smaller and therefore has fewer levels in the tree. Fewer levels in the B+ tree mean less processing to determine the target leaf page for the index entry.
- In both cases, whether clustered by a clustering index or by MDC, the DB2 database manager accesses the index (the clustering index or the block index) during INSERT to determine the target location of the row. Again, the block index is much smaller, and the height of the tree usually shorter, resulting in a faster search.

Determining when to use MDC versus a clustering index

MDC provides huge value over a clustering index because the clustering is guaranteed and automatic. In general, you can achieve cluster ratios with MDC anywhere between 93% and 100%, depending on the coarsification needed. In contrast, clustering indexes can cluster data close to 100% initially, but the data becomes declustered over time and might require a time-consuming REORG to recluster. In general, use MDC to create and maintain data clustering in your database unless:

- MDC would require coarsification and you are unable to add a generated column to your table.
- The MDC version of the table results in table growth you are unable or unwilling to incur. Well-designed MDC tables are typically 2-15% larger than non-MDC tables.
- You find that MDC clustering would give you a lower cluster ratio (for example, 93%) due to coarsification, and you are willing to incur the periodic REORG processing in order to get the improved clustering that can be achieved with a clustering index.

Use the following MDC design best practices:

- Start your selection of MDC candidates by looking for columns that are used as predicates for equality, inequality, range, and sorting.
- To improve roll-in of data, your dimension should match your roll-in range.
- Strive for density! Remember, an extent is allocated for every existing cell, regardless of the number of rows in that cell. To leverage MDC with optimal space utilization, strive for densely filled blocks.
- Constrain the number of cells in an MDC design. Keep the number of cells reasonably low to limit how much additional storage the table requires when converted to MDC form; 5% to 10% growth for any single table is a reasonable goal. (See the discussion on MDC cells in the Benefits of using MDC section.) There are exceptions, where even double the growth is useful, but they are rare. Note: Block indexes are usually so small as a percentage of the corresponding table size that, in most cases, you can ignore the storage required for them.
- Coarsify some dimensions to improve data density. Use generated columns to create coarsifications of a table column with much lower column cardinality. For example, create a column on the month-of-year part of a date column, or use (INT(colname))/100 to convert a DATE column with the format Y-M-D to Y-M. For example:

CREATE TABLE Sales (
    SALES_DATE DATE,
    REGION CHAR(12),
    PRODUCT CHAR(30),
    MONTH GENERATED ALWAYS AS (INTEGER(SALES_DATE)/100)
)
ORGANIZE BY (MONTH, REGION, PRODUCT)

For the query:

SELECT * FROM Sales
WHERE SALES_DATE > '2006-03-03'
  AND SALES_DATE < '2007-01-01'

the compiler generates the additional predicates MONTH >= 200603 and MONTH <= 200612.

- To reduce wasted space, specify a small table-space extent size, which reduces your MDC block size.
- Don't select too many dimensions. It is very rare to find useful designs that have more than three MDC dimensions without unreasonable storage requirements. The more dimensions you have, the more the number of cells increases, exponentially. This makes it extremely hard to constrain the expansion of the MDC table to the design goal of approximately 10% (versus a non-MDC table). If the table expands unreasonably (for example, to more than two times its non-MDC size), not only do you require more storage, but the gains of clustering might be lost due to the increased I/O on partially filled blocks. A simple example: consider a table with three dimensions worth clustering on, each with 10,000 unique values. If these columns have no correlation between them, then clustering on all three dimensions without coarsification would result in 10,000 x 10,000 x 10,000 cells, with a partially filled block per cell. If each block is 1 MB, the overhead from this careless design would be around 500,000 TB!
- Consider single-dimensional MDC. Single-dimensional MDC can still provide massive benefits compared to traditional single-dimensional clustering indexes, because:
  o Clustering is guaranteed.
  o MDC tables are indexed by block and not by row, resulting in indexes that are roughly 1/1000th the size of traditional row-based indexes.
  o DELETE performance using MDC roll-out is improved.
  o RID indexes on MDC tables are updated asynchronously during roll-out.
  o MDC facilitates roll-in of data.
- Use single-dimensional MDC (with coarsification if needed) to enforce clustering instead of using a clustering index. Clustering indexes cluster data on a best-effort basis (there are no guarantees of how well they cluster), and over time they tend to become unclustered. In contrast, MDC guarantees clustering, avoiding the need to reorganize data. (See the coarsification example in the MDC storage scenario section.)

- Be prepared to tinker (on a test database). It might take trial and error to find an MDC design that works really well.
- Use the DB2 Design Advisor with the -m C option (C for clustering search). You can also use the db2mdcsizer utility, which determines space requirements and simplifies administration of MDC tables; this utility is available on alphaWorks for certain versions of DB2 products.
- MDC modifications do not impact your application programs.
- Use the MDC selection capability of the DB2 Design Advisor with a representative workload to find suitable MDC dimensions for an existing table.
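Here is a sketch of invoking the Design Advisor for MDC recommendations from the command line, in the style of the db2pd example earlier; the database name and workload file are placeholders:

db2advis -d MY_DATABASE -i workload.sql -m C

The file named by -i contains the representative SQL workload, and -m C restricts the advisor to multidimensional clustering recommendations.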

Database partitioning (shared-nothing hash partitioning) best practices

Database partitioning is a technique for horizontally distributing the rows of a database across many database instances that work together to form a single large database server. These instances can be located within a single server, across several physical machines, or a combination of both. In DB2 products, this is called the Database Partitioning Feature (DPF).

Database partitioning allows the DB2 database manager to scale to hundreds of instances that participate in the larger database system, and the scalability of this design can approach near-linear scale-out for many complex query workloads. As such, database partitioning has become extremely popular for data warehousing and BI workloads, due to its near-linear scale-out characteristics and its ability to scale to hundreds of terabytes of data and hundreds of CPUs. The architecture is less popular for OLTP processing, due to the inter-instance communication incurred on each transaction, which, though small, can still be very significant for the short-running transactions typically found in OLTP workloads. DPF might, however, be used for OLTP applications that require a cluster of computers for throughput.

Shared-nothing hash partitioning hashes rows to logical data partitions. The primary design goal of hash distribution is to ensure the even distribution of data across all logical nodes (whereas range partitioning tends to skew data). These partitions might reside within a single server or be distributed across a set of physical machines, as shown in Figure 9.

Figure 9. Table hash-partitioning
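A minimal sketch of creating a hash-distributed table follows; the table, columns, and names are illustrative assumptions, and the DISTRIBUTE BY HASH clause shown is the DB2 9 syntax:

CREATE TABLE SALES_FACT (
    CUST_ID   INTEGER NOT NULL,   -- high-cardinality column for even distribution
    SALE_DATE DATE    NOT NULL,
    AMOUNT    DECIMAL(12,2)
)
DISTRIBUTE BY HASH (CUST_ID);

Rows are hashed on CUST_ID to the database partitions of the table space's database partition group.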

The scalability of shared-nothing databases has proven to be nearly linear for a wide range of complex query workloads. Also, the modular nature of the design lends itself to linear scale-out as storage pressures, workload pressures, or both grow. As a result, shared-nothing architectures have dominated data warehousing for the past decade. Database partitioning is implemented without impact on existing application code and is completely transparent; partitioning strategies can be modified online with the redistribution utility without affecting application code.

The primary design choice is determining which columns to use to hash-partition each table: the database partitioning key. The goals are twofold:

1. Distribute data evenly across database partitions. This requires choosing partitioning columns that have a high cardinality of values, to ensure an even distribution of rows across the logical partitions.
2. Minimize the shipping of data across database partitions during join processing. Collocation of the rows being joined occurs (avoiding movement) if the partitioning key is included in the join predicates.

Another central problem in designing shared-nothing data warehouses is determining the best combination of memory, CPUs, buses, storage capacity, storage bandwidth, and networks. How much or how many do you need of each? To help solve this problem, IBM provides the IBM Balanced Warehouse, which is based on the DB2 database system's shared-nothing architecture and was developed through IBM best practices used in successful client implementations.

Balanced Warehouse and Balanced Configuration Units (BCU)

The Balanced Warehouse combines building blocks known as Balanced Configuration Units (BCUs). These building blocks are preconfigured, pre-tested, and tuned for performance to provide an ideal volume and ratio of system resources. The BCU combines best practices for database configuration and hardware components to greatly simplify warehouse setup and deployment; scores of best practices for resource ratios and database configuration have been incorporated into the Balanced Warehouse.

Figure 10 shows the various Balanced Warehouse offerings for 2007 and 2008 [5]. The Balanced Warehouse currently offers three classes of offerings, C, D, and E, providing increasing power and scalability. The C class is an entry-level offering, intended for SMB markets or systems integrators, that can be contained in a single server. The D and E class offerings scale out to much larger configurations using DB2 database partitioning capabilities.

[5] For an up-to-date version of the Balanced Warehouse offerings, refer to the Balanced Warehouse web pages online.

Figure 10. Balanced Warehouse offerings [6]

[6] Prices reflected in the Estimated Cost column of this figure are current as of May 2008, exclude applicable taxes, and are subject to change by IBM without notice.

Use the following database partitioning best practices:

- Select partitioning keys that have a large number of values (high cardinality) to ensure even distribution of rows across partitions. Unique keys are good candidates. If you are having a difficult time finding a key that distributes data evenly across partitions, consider using a function on a column.
- Avoid choosing a partitioning key containing a column that is updated frequently; updating the column could incur the additional overhead of repartitioning the row to another partition.
- If possible, choose as your partitioning key a column that has a simple datatype, such as fixed-length character or integer; hashing performance benefits from this, versus a complex datatype.
- To increase collocation, consider using the join column as the partitioning key for a table that is frequently joined (provided that the column has high enough cardinality to satisfy the even distribution of rows). Select the minimum number of columns required to achieve high cardinality and even distribution of rows in the partitioning key.

- Reducing the number of columns in the partitioning key improves the likelihood that the columns appear in join predicates (improving the odds of collocation).
- Ensure that unique indexes are a superset of the partitioning key.
- Use replicated MQTs for small tables (a reasonable rule of thumb is tables that are less than 3% of the total database size, or less than 5% of the size of the largest table) or for infrequently updated tables, in order to:
  o Improve collocation and reduce movement over the network
  o Assist in the collocation of joins
  o Improve the performance of frequently executed joins in a partitioned database environment by allowing the database to manage precomputed values of the table data

  For example:

  CREATE TABLE R_EMPLOYEE AS (
      SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT
      FROM EMPLOYEE
  )
  DATA INITIALLY DEFERRED REFRESH IMMEDIATE
  IN REGIONTABLESPACE
  REPLICATED;

  To update the content of the replicated MQT, run the following statement:

  REFRESH TABLE R_EMPLOYEE;

  Note: After using the REFRESH statement, you should run RUNSTATS on the replicated table as you would on any other table.

- Collocate the fact table with its largest dimension table by using that dimension's key as the partitioning key of the fact table, considering the number of distinct values and the skew within the corresponding fact-table column. Replicate the small (less volatile) dimension tables, where "small" is relative and depends on the installation's available storage.
- Replicate a horizontal or vertical subset of the dimension tables that don't match the partitioning key, as follows:
  o Partition any remaining dimensions on their PK.

  o After creating a replicated table to improve collocation, remember to collect table and index statistics (or use the DB2 automatic statistics collection feature). Remember to implement the same indexes on the replicated MQTs as you have defined on the base table(s).
  o Define replicated MQTs as REFRESH IMMEDIATE if they are small and rarely updated. Try to limit the number of parallel ETL jobs executing when REFRESH IMMEDIATE is specified; a deferred refresh strategy imposes less overhead on updates of the base table.
- Distribute large tables across several partitions. Small tables, with less than one million rows, should be located on one database partition only.

Table (range) partitioning best practices

Table partitioning should be used predominantly to facilitate improved roll-in and roll-out of data. It enables an administrator to add a large range of data (such as a new month of data) to a table en masse, and, perhaps more importantly, it allows an administrator to remove data from a table, or from the database, en masse and almost in an instant (without data movement). The DB2 database system's unique asynchronous index-cleanup technology means that, even while using global indexes that index data across several range partitions, a range can be detached from the table and the index keys associated with that range become immediately invisible to incoming queries. The keys are subsequently deleted quietly in a background process with negligible impact on the executing database workload.

Table partitioning also offers a side benefit of increased query performance through an internal process called partition elimination, which, in many cases, enables the query compiler to select improved execution plans. This is a secondary benefit of table partitioning.

Furthermore, table partitioning enables the division of a table into several ranges that are stored in one or more physical objects within a database logical partition. The goal of table partitioning is to logically organize data to facilitate optimal data access and the roll-out of data. The division of the table into ranges is transparent to the application, and can therefore be designed at any point in the application development cycle. See the Best Practices: Data Life Cycle Management white paper for more details on table partitioning.

Other attributes and features of table partitioning include the following:

- Each range can be in a different table space
- Ranges can be scanned independently
- Performance for certain BI-style queries is improved through partition elimination
- New ALTER TABLE ... ATTACH and DETACH statements for easier roll-in and roll-out of data:
  o ATTACH for roll-in
  o DETACH for roll-out
- SET INTEGRITY is now online (allowing read/write access to older data)
- For new ranges, ADD plus LOAD operations can be used instead of ATTACH plus SET INTEGRITY operations

The following example shows how to define a partitioned table:

CREATE TABLE SALES (SALE_DATE DATE, CUSTOMER INT, ...)
    PARTITION BY RANGE (SALE_DATE)
    (STARTING '1/1/2006' ENDING '3/31/2006',
     STARTING '4/1/2006' ENDING '6/30/2006',
     STARTING '7/1/2006' ENDING '9/30/2006',
     STARTING '10/1/2006' ENDING '12/31/2006');

This statement results in the creation of four table objects, each of which stores one range of data, as shown in Figure 8:

Figure 8. Table partitioning by date range

Use the following table partitioning best practices:

Use table (range) partitioning to rapidly delete (roll out) ranges of data.

Match range-partitioning periods to roll-in and roll-out ranges. For example, if you need to roll in and roll out data by month, range partitioning by month is a reasonable strategy.

Partition on DATE columns. Roll-in and roll-out scenarios are almost always based on dates, and a significant share of the opportunities for improved query execution plan (QEP) selection through partition elimination (7) are also based on date predicates. Typically, partitioning by date satisfies the roll-out requirement and also provides partition elimination benefits to many queries.

Limit the number of ranges. Remember that each range is a table object with a minimum of two extents; avoid designs with an excessive number of partitions. A rule of thumb is at least 50 MB of data in each range (several gigabytes of data per range is best). Make the size of your ranges match the size you typically roll out.

(7) Partition elimination improves your SQL workload performance. Partition elimination is a strategy used internally by the query compiler, which automatically determines whether it can exploit the table partitioning for this purpose.
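To see partition elimination in action, consider a query against the SALES table above with a restrictive date predicate; the compiler can limit the scan to the single range that can contain qualifying rows. A minimal sketch:

-- Only the second range (4/1/2006 through 6/30/2006) can contain
-- qualifying rows, so the other three ranges are eliminated from the plan.
SELECT CUSTOMER, SALE_DATE
FROM SALES
WHERE SALE_DATE BETWEEN '2006-04-10' AND '2006-05-20';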

When adding new ranges, an ADD of a table partition followed by a LOAD operation is often faster than an ATTACH of a partition with subsequent SET INTEGRITY operations.
o The LOAD utility has an option to maintain indexes incrementally, and it writes only a single log record for the event, regardless of how many rows are inserted into the table. Although the LOAD utility supports concurrent read access to older data, queries must first be drained.

Consider placing table partitions in separate table spaces to facilitate backup and recovery. Table partitions (ranges) can be backed up and restored by table space, so you can back up a new range as its data is rolled in without having to back up the other partitions. This greatly improves the speed and reduces the size of backup images.

Place global indexes, which can be large, in their own individual table spaces. Placing all the global indexes in a single table space can impact the elapsed time of the BACKUP utility (because that index table space can become much larger than the data table spaces).

Ensure and maintain the clustering of data by making the range-partitioning key the leading columns of a clustering index (when MDC is not used). Data will not be clustered properly if your clustering index is not prefixed by your table-partitioning key. For example:

PARTITION BY RANGE (Month, Region)
CREATE INDEX ... (Month, Region, Department) CLUSTER

Use page-level sampling to reduce RUNSTATS time. A sampling rate of 10% to 20% provides good-quality statistics with a major performance improvement. For details, see the Best Practices: Writing and Tuning Queries for Optimal Performance white paper.

The basic DETACH and ATTACH operations for roll-out and roll-in are sketched below.
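The roll-out and roll-in operations themselves take the following general form. A hedged sketch against the SALES table above (the partition and table names are illustrative):

-- Roll-out: detach the oldest range into a standalone table.
-- This is nearly instantaneous because no data is moved.
ALTER TABLE SALES DETACH PARTITION PART0 INTO SALES_2006Q1;

-- Roll-in: attach a pre-loaded staging table as a new range, then
-- validate it; ALLOW WRITE ACCESS keeps the rest of the table available.
ALTER TABLE SALES ATTACH PARTITION
    STARTING '1/1/2007' ENDING '3/31/2007'
    FROM SALES_2007Q1_STAGE;
SET INTEGRITY FOR SALES ALLOW WRITE ACCESS IMMEDIATE CHECKED;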

UNION All View (UAV) partitioning best practices

Prior to the availability of DB2 9 table partitioning, applications often had a requirement to partition data by ranges. By creating a table for each range with the appropriate constraints, DBAs were able to provide a single system view by creating a UAV over all the tables. For example:

CREATE TABLE TESTQ1 (DT DATE);
ALTER TABLE TESTQ1 ADD CONSTRAINT Q1_CHK CHECK (MONTH(DT) IN (1,2,3));

Repeat the CREATE TABLE statement and constraint for each quarter, and then create the view:

CREATE VIEW TEST AS
    SELECT * FROM TESTQ1
    UNION ALL
    SELECT * FROM TESTQ2
    ...;

Table partitioning presents a single view of the table to the compiler and optimizer. This allows more aggressive predicate push-down to the different ranges than a UAV does, and it provides a more consistent model for partitioning data. Table partitioning is the recommended method for implementing range-based partitioning for most application requirements.

NOTE: UAVs are not a parallel processing method for dividing work across CPUs. The DB2 Database Partitioning Facility (DPF) should be used for that purpose (see the discussion on database partitioning). As with table partitioning, you can use UAVs to store ranges of data in distinct table spaces, providing granularity for BACKUP operations (see the discussion on table partitioning).

The advantages of the UAV design predominantly revolve around the ability to operate on some ranges independently of others, or to design some ranges with unique attributes. Conversely, table partitioning provides a homogeneous view of a range-partitioned table. Although table partitioning is generally preferred, UAVs do have some advantages:

For replication: historical tables in a UAV can be compressed. Use UAVs when replication is needed on certain ranges of data, while other ranges that do not require replication can benefit from compression.

UAVs reduce the granularity of utility operations (such as REORG and RUNSTATS); a utility can operate on the single table that contains a range. NOTE: REORG is commonly the most important of these. This is valuable when ranges change frequently, requiring reclustering or recompression of a range. UAVs allow this operation to be performed on only the subset of ranges that

require it. DB2 9.5 has automatic dictionary rebuild for table partitioning, alleviating the need to REORG a new range for compression.

Heavily used ranges can be isolated into separate tables containing additional indexes or MQTs to optimize data access.

UAVs provide end users with a single view of federated data (stored in multiple IBM or non-IBM databases); a UAV can present data from several databases as one view.

Table partitioning provides the following advantages over the UAV partitioning approach:

Faster statement preparation (one table instead of multiple tables in a view)
Simpler management (one table, not multiple tables)
Less catalog locking for roll-in and roll-out of ranges
Support for unique indexes across all ranges
Better handling of complex queries
Simpler explain output (using the explain facility)

Migrating UAVs to table partitioning

The migration of UAVs to table partitioning can be achieved without data movement by following this procedure (a SQL sketch appears after the best practices below):

1. Create a partitioned table with a single dummy partition whose range does not interfere with the existing ranges. This table requires the same page size and extent size as the existing tables.
2. ALTER ... ATTACH each table in the UAV.
3. Drop the dummy partition.
4. Run SET INTEGRITY after all of the ATTACH operations. To speed up SET INTEGRITY: a) drop all indexes, and b) re-create the indexes after SET INTEGRITY completes.

Use the following UAV partitioning best practices:

Use database partitioning, rather than UAVs, to achieve scalability.
As with table partitioning, use UAVs to place ranges of data in distinct table spaces, improving BACKUP granularity.
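The following sketch illustrates the migration procedure for the quarterly TESTQ1 through TESTQ4 tables shown earlier (the dummy range, partition names, and target table names are illustrative):

-- 1. Create the partitioned table with a dummy range that cannot
--    overlap any real data (same page size and extent size required).
CREATE TABLE TEST_RANGED (DT DATE)
    PARTITION BY RANGE (DT)
    (PARTITION DUMMY STARTING '1/1/1900' ENDING '1/2/1900');

-- 2. Attach each UAV member table as a range (no data movement).
ALTER TABLE TEST_RANGED ATTACH PARTITION
    STARTING '1/1/2006' ENDING '3/31/2006' FROM TESTQ1;
-- ...repeat for TESTQ2, TESTQ3, and TESTQ4...

-- 3. Remove the dummy partition.
ALTER TABLE TEST_RANGED DETACH PARTITION DUMMY INTO DUMMY_DROPPED;
DROP TABLE DUMMY_DROPPED;

-- 4. Validate the attached ranges (drop indexes first and re-create
--    them afterward to speed this step up).
SET INTEGRITY FOR TEST_RANGED IMMEDIATE CHECKED;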

Recommendation: Migrate UAVs to table partitioning, taking the following considerations into account:

Newly developed applications with range-partitioning requirements should be implemented with table partitioning rather than with UAVs, unless you have strong requirements for one or more of the UAV advantages listed above.

UNION ALL applications that are being migrated to table partitioning and that use deep compression should be implemented on DB2 9.5 in order to benefit from automatic dictionary compression.

Database partitioning, table partitioning, and MDC in the same database design best practices

Database partitioning, table partitioning, and MDC can be implemented simultaneously in the same design:

Database partitioning can be implemented to help achieve scalability and to ensure the even distribution of data across logical partitions.
Table partitioning can be implemented to facilitate query partition elimination and the roll-out of data.
MDC can be implemented to improve query performance and to facilitate the roll-in of data.

This is a best-practice approach for deploying large-scale applications. For example:

CREATE TABLE TESTTABLE (A INT, B INT, C INT, D INT)
    IN TBSP_A, TBSP_B, TBSP_C INDEX IN TBSP_B
    DISTRIBUTE BY HASH (A)
    PARTITION BY RANGE (B) (STARTING FROM (100) ENDING (300) EVERY (100))
    ORGANIZE BY DIMENSIONS (C, D);

See the Best Practices: Data Life Cycle Management white paper for more details.

To deploy large-scale applications, implement database partitioning, table partitioning, and MDC in the same database design.

Roll-in and roll-out of data with table partitioning and MDC best practices

Design your partitioning strategy to use table partitioning for your roll-out strategy and MDC on a single dimension for your roll-in strategy. For example, if you roll in daily and roll out monthly, specify an MDC dimension on day and a table-partitioning key on month (calculated values are supported). This approach reduces the number of table partitions and eases DBA administrative tasks, and it takes advantage of the roll-in features of MDC: reduced index I/O through block indexes and reduced logging. A sketch of such a table follows below. See the Best Practices: Data Life Cycle Management white paper for more details.

Use table partitioning for roll-out, and MDC on a single dimension for roll-in.
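A minimal sketch of this day/month split, assuming generated columns that derive the month and day values from the sale date (all names are illustrative; INTEGER(date) returns the date as yyyymmdd in DB2):

CREATE TABLE SALES_DAILY (
    SALE_DATE  DATE NOT NULL,
    CUSTOMER   INT,
    AMOUNT     DECIMAL(12,2),
    -- Calculated month (yyyymm) drives monthly roll-out via range partitions.
    SALE_MONTH INT GENERATED ALWAYS AS (INTEGER(SALE_DATE) / 100),
    -- Calculated day (yyyymmdd) drives daily roll-in via MDC block indexes.
    SALE_DAY   INT GENERATED ALWAYS AS (INTEGER(SALE_DATE))
)
PARTITION BY RANGE (SALE_MONTH)
    (STARTING FROM (200601) ENDING (200612) EVERY (1))
ORGANIZE BY DIMENSIONS (SALE_DAY);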

Rolling-in large data volumes using table partitioning best practices

Applications that need to roll in very large data volumes can speed up the process by ADDing rather than ATTACHing table partitions, which avoids the need to execute SET INTEGRITY. That is, as an alternative to ATTACHing a populated table as a partition, you can ALTER ... ADD an empty partition to the table. After the empty partition has been added, you can populate it by using the LOAD utility (with read access to older data) or by using inserts (which are logged). LOAD provides superior performance and can load either from external files or from a query definition, using the LOAD-from-cursor capability. For applications that use deep compression, DB2 9.5 facilitates this roll-in technique because it provides automatic dictionary compression, avoiding the need to REORG in order to compress the data. A sketch follows below. See the Best Practices: Data Life Cycle Management white paper for more details.

Use the following roll-in and roll-out best practices:

Use table partitioning to roll out large volumes of data.
ALTER ... ADD an empty partition and populate it by using the LOAD utility when using table partitioning for roll-in of data.
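A minimal sketch of the ADD-plus-LOAD approach, including the LOAD-from-cursor variant (the file name, staging table, and range boundaries are illustrative and follow the SALES_DAILY sketch above):

-- Add an empty range for the new month; no SET INTEGRITY is needed.
ALTER TABLE SALES_DAILY ADD PARTITION STARTING (200701) ENDING (200701);

-- Populate it from an external delimited file while keeping
-- older ranges readable.
LOAD FROM sales_200701.del OF DEL
    INSERT INTO SALES_DAILY ALLOW READ ACCESS;

-- Alternatively, populate it from a query definition (LOAD from cursor).
DECLARE C1 CURSOR FOR
    SELECT * FROM SALES_STAGING WHERE SALE_MONTH = 200701;
LOAD FROM C1 OF CURSOR INSERT INTO SALES_DAILY ALLOW READ ACCESS;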

Materialized query table (MQT) best practices

An MQT is a table whose definition is based on the result of a query; the MQT contains precomputed results. MQTs are a powerful way to improve response times for complex queries, especially queries that require any of the following types of data or operations:

Aggregated data over one or more dimensions
Joins and aggregated data between tables in a group
Data from a commonly accessed subset of data, that is, from a "hot" horizontal or vertical partition of the database
Repartitioned data from a table, or part of a table, in a partitioned database environment
Replicated data (replicated MQTs can reduce network traffic for non-partitioned tables in a DPF environment)

In addition to speeding up query performance, MQTs can be used on nicknames of federated data sources to maintain frequently accessed data locally. Such MQTs can be maintained with SQL or Q Replication (the system-maintained MQT option is not supported for federated nicknames).

MQTs are completely transparent to applications. Knowledge of MQTs is integrated into the SQL and XQuery compiler, which determines whether an MQT should be used to answer all or part of a query. As a result, you can create and drop MQTs without making application code changes, much as you can create and drop indexes without making application code changes. A sketch of a simple aggregate MQT follows below.

Figure 11 summarizes the characteristics of MQTs according to their refresh type. In the figure, "Optimization" indicates that the DB2 database manager will exploit the deferred MQT where possible when it processes a query, whereas "No optimization" indicates that the MQT will not be considered, because it could be arbitrarily stale; that is, the database manager does not know when the MQT was last refreshed. Note that MQTs can decrease INSERT performance on the base table. To assist in problem determination, the DB2 9 explain facility indicates why an MQT was not chosen for an access path.
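As a concrete illustration, a deferred-refresh aggregate MQT might look like the following sketch (the table and column names are illustrative):

-- Precompute sales totals by region and month. REFRESH DEFERRED means
-- the MQT is repopulated only when REFRESH TABLE is run.
CREATE TABLE SALES_BY_REGION AS (
    SELECT REGION, SALE_MONTH,
           SUM(AMOUNT) AS TOTAL_AMOUNT,
           COUNT(*) AS ROW_CNT
    FROM SALES
    GROUP BY REGION, SALE_MONTH
) DATA INITIALLY DEFERRED REFRESH DEFERRED;

-- Populate (and later repopulate) the precomputed results.
REFRESH TABLE SALES_BY_REGION;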

Figure 11. Summary of MQT characteristics by refresh type

Use the following MQT design best practices:

Create an MQT by using the same or a higher isolation level than the one used by the queries for which you intend the MQT. The isolation levels, in order of descending restrictiveness, are RR, RS, CS, and UR.

Focus on frequently used queries that consume a lot of resources. These queries provide the greatest opportunities for performance gains through MQTs.

Set a limit on the number of MQTs that you are willing to maintain, for two reasons:
o Each MQT uses storage space on disk and adds UPDATE overhead.
o Each MQT adds complexity to the search for the optimal QEP, increasing query compilation time.

Decide on a limit for the amount of disk space available for MQTs. Generally, do not allocate more than 10% to 20% of the total system storage of a data warehouse to MQTs.

Consider indexing the MQTs, and execute RUNSTATS after index creation (see the sketch below). Try to create an MQT that is generally useful to multiple queries; often, such an MQT is not a perfect match for any one query and might require indexing. Replicated MQTs should have the same index design as the base table.

Help the query compiler find matching MQTs. (MQT routing is complex.) Give the compiler as much information as possible by using the following techniques:
o Keep statistics on the MQTs up to date.
o Use referential integrity (RI) on the foreign key columns in the MQT. (To avoid system overhead, specify non-enforced RI.)
o Make the foreign key columns NOT NULL.

Avoid problematic MQT designs that make routing difficult. Try to avoid using EXISTS, NOT EXISTS, and SELECT DISTINCT: unless the MQT is an exact match for a query, these constructs can make it difficult for the query compiler to make use of the MQT.
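A minimal sketch of the indexing and statistics advice, applied to the SALES_BY_REGION MQT from the earlier example (the index name and schema are illustrative):

-- Index the MQT on its most commonly queried grouping columns.
CREATE INDEX IX_SBR ON SALES_BY_REGION (REGION, SALE_MONTH);

-- Refresh table and index statistics so that the compiler can
-- accurately cost routing queries to the MQT.
RUNSTATS ON TABLE DBUSER.SALES_BY_REGION AND INDEXES ALL;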

Post-design tools for improving designs for existing databases

Explain facility best practices

The explain facility can show you whether your design features are being used. For example, it can show whether indexes are being accessed in a QEP, whether partition elimination is being used, and whether queries are being routed to MQTs. Consider the fragment, shown in Figure 12, of the QEP that the explain facility produced for query 20 of TPC-H (8). A sketch of how to produce such formatted plan output appears below.

Figure 12. Fragment of the QEP for query 20 of TPC-H

The QEP clearly shows that retrieving the information for PARTSUPP requires access to both the index TPCD.UXPS_PK2KSC and the PARTSUPP table itself. How can you determine the reason?

(8) TPC-H: The TPC Benchmark H (TPC-H) is a decision support benchmark. It consists of a suite of business-oriented, ad hoc queries and concurrent data modifications. The queries and the data populating the database have been chosen to have broad industry-wide relevance. The benchmark illustrates decision support systems that examine large volumes of data, execute queries with a high degree of complexity, and give answers to critical business questions.
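One common way to capture and format such a plan is sketched below; it assumes that the explain tables have already been created, and the database name and output file name are illustrative:

-- Capture the access plan for a statement into the explain tables.
EXPLAIN PLAN FOR
    SELECT PS_PARTKEY, PS_SUPPKEY, PS_AVAILQTY
    FROM PARTSUPP
    WHERE PS_PARTKEY = 12345;

-- Then, from the command line, format the most recently captured plan:
--   db2exfmt -d TPCD -1 -o query20_plan.txt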

Looking at operator (15), you can see that the FETCH operator requires access to the PARTSUPP table because the index includes the PS_PARTKEY and PS_SUPPKEY columns but not the PS_AVAILQTY column. This strongly suggests that, by adding the PS_AVAILQTY column to this index, you can avoid accessing the PARTSUPP table in the subplan, thereby improving performance.

The explain output shown in Figure 13 (from DB2 9.1) indicates which MQTs the optimizer considered but did not choose for a QEP, and explains why. The reason might be cost, or the MQT might not be similar enough to the query to be matched. For example:

explain plan for select c1, count(*) from t1 where c2 >= 10 group by c1;

EXP0073W The following MQT or statistical view was not eligible because one or more data filtering predicates from the query could not be matched with the MQT: "PKSCHO"."MQT2".
EXP0073W The following MQT or statistical view was not eligible because one or more data filtering predicates from the query could not be matched with the MQT: "PKSCHO"."MQT3".
EXP0148W The following MQT or statistical view was considered in query matching: "PKSCHO"."MQT1".
EXP0148W The following MQT or statistical view was considered in query matching: "PKSCHO"."MQT2".
EXP0148W The following MQT or statistical view was considered in query matching: "PKSCHO"."MQT3".
EXP0149W The following MQT was used (from those considered) in query matching: "PKSCHO"."MQT1".

Figure 13. Using the explain facility to understand MQT selection

Utilize the explain facility to help you understand your design choices.

DB2 Design Advisor best practices

The DB2 Design Advisor is a key feature of the DB2 autonomic computing initiative. It is a push-button solution: given a workload (user-provided or system-detected) and, optionally, a disk constraint (9), the Design Advisor recommends physical database design options that are intended to optimize the execution of the workload provided. The Design Advisor performs extensive what-if analysis, data sampling, and correlation modeling to explore thousands of design permutations that humans cannot. The Design Advisor has the following capabilities:

o Index selection
o MQT selection
o MDC selection
o Partitioning selection (for database partitioning)
o Industry-leading workload compression

(9) Disk constraint: a limit on the amount of disk space that the Design Advisor can consider available for adding new design features. For example, if the limit is 100MB, the new design aspects recommended by the Design Advisor, such as additional indexes or MQTs, should not consume more than an additional 100MB in total.

Many customers have reported using the Design Advisor to make dramatic improvements in physical database design, leading to performance improvements of more than five times for individual queries or entire workloads. Of course, you should not apply the results from the Design Advisor without due consideration.

Figure 14 highlights the benefit of the Design Advisor. In this example, a decision-support database running the TPC-H workload and data set was created with a reasonable set of indexes, meaning that a good database designer could have come up with this set and considered it adequate. The Design Advisor was then used to provide additional recommendations for the database, which, when applied, resulted in a six-and-a-half-times performance gain.

Figure 14. Benefits from the DB2 Design Advisor

MDC selection capability of the DB2 Design Advisor

For improved workload performance, use the MDC selection capability of the Design Advisor to obtain recommended clustering dimensions for use in an MDC table, including coarsification on base columns. Only single-column dimensions, not composite-column dimensions, are considered, although single or multiple dimensions can be recommended for the table. The MDC selection capability is enabled by using the -m <advise type> flag of the db2advis utility. The advise types (C for MDC and clustering indexes, I for indexes, M for MQTs, and P for database partitioning) can be used in combination with each other, as sketched below.
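A sketch of a typical invocation from the command line (the database name, workload file, disk limit, and output file are illustrative):

db2advis -d SALESDB -i workload.sql -m MICP -l 100 -o advice.ddl

Here, -d names the database, -i supplies a file of workload SQL statements, -m MICP requests MQT, index, MDC/clustering, and database-partitioning recommendations together, -l caps the additional disk space at 100 MB, and -o writes the recommended DDL to a file for review before you apply it.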
