COMP102: Introduction to Databases, Lecture 14. Dr Muhammad Sulaiman Khan, Department of Computer Science, University of Liverpool, U.K. 8 March, 2011
Physical Database Design: Some Aspects
Specific topics for today: Purpose of physical database design. How to map the logical database design to a physical database design. How to design base tables for the target DBMS. How to design the representation of derived data. How to design business rules for the target DBMS. How to analyze users' transactions to determine characteristics that may impact performance. How to select appropriate file organizations based on analysis of transactions. When to select indexes to improve performance.
Logical vs Physical Database Design Logical DB design is independent of implementation details, such as the functionality of the target DBMS. Logical DB design is concerned with the what; physical database design is concerned with the how. Sources of information for physical design include the logical data model and the data dictionary.
Physical Database Design Process of producing a description of the implementation of the database on secondary storage. It describes: base tables, file organizations, and indexes used to achieve efficient access to the data, and any associated integrity constraints and security restrictions.
Overview of Physical Database Design Methodology Step 3: Translate logical database design for target DBMS. Step 4: Choose file organizations and indexes. Step 5: Design user views. Step 6: Design security mechanisms. Step 7: Consider introduction of controlled redundancy. Step 8: Monitor and tune operational system.
Step 3: Translate logical database design for target DBMS To produce a basic working relational database from the logical data model. Consists of the following steps: Step 3.1: Design base tables. Step 3.2: Design representation of derived data. Step 3.3: Design remaining business rules. Need to know the functionality of the target DBMS, such as how to create base tables and whether the DBMS supports the definition of: PKs, FKs, and AKs; required data, i.e., whether the system supports NOT NULL; domains; relational integrity rules; business rules.
Step 3.1: Design base tables To decide how to represent the base tables identified in the logical model in the target DBMS. Need to collate and assimilate the information about tables produced during logical database design (from the data dictionary and tables defined in DBDL). For each table, need to define: name of the table; list of simple columns in brackets; PK and, where appropriate, AKs and FKs; referential integrity constraints for any FKs identified. For each column, need to define: its domain, consisting of a data type, length, and any constraints on the domain; an optional default value for the column; whether the column can hold nulls.
Example: DBDL for the Branch table
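The DBDL figure for the Branch table is not reproduced here, but the base-table design step can be sketched as SQL DDL run through Python's sqlite3. The column names and domains below (branchNo, street, city, state, zipCode, mgrStaffNo) are assumptions based on the video-rental case study, not the actual figure.

```python
import sqlite3

# Sketch of a Branch base-table definition: for each column we give a domain
# (data type and length) and a null rule, plus the PK, as Step 3.1 requires.
# Column names and types are assumptions; the DBDL figure is not shown above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Branch (
        branchNo   CHAR(4)      NOT NULL,   -- PK: required value
        street     VARCHAR(30)  NOT NULL,
        city       VARCHAR(20)  NOT NULL,
        state      CHAR(2)      NOT NULL,
        zipCode    CHAR(5)      NOT NULL,
        mgrStaffNo CHAR(5),                 -- FK to Staff(staffNo); may be null
        PRIMARY KEY (branchNo)
    )
""")
conn.execute("INSERT INTO Branch VALUES "
             "('B001', '8 Jefferson Way', 'Portland', 'OR', '97201', 'S1500')")
row = conn.execute("SELECT city FROM Branch WHERE branchNo = 'B001'").fetchone()
print(row[0])  # Portland
```

SQLite is used only because it is bundled with Python; the same DDL carries over to any target DBMS that supports these domain definitions.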
Step 3.2: Design representation of derived data To design the representation of derived data in the database. Produce list of all derived columns from logical data model and data dictionary. Derived column can be stored in database or calculated every time it is needed. Option selected is based on: additional cost to store the derived data and keep it consistent with data from which it is derived; cost to calculate it each time it is required. Less expensive option is chosen subject to performance constraints.
Example: RentalAgreement and Member with derived column noofrentals:
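The RentalAgreement/Member figure is not reproduced, but the two options for the derived column noofrentals can be sketched: calculate it with a query each time, or store it in Member and keep it consistent. Table and column names here are assumptions from the case study.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Member (memberNo CHAR(5) PRIMARY KEY, name VARCHAR(30));
    CREATE TABLE RentalAgreement (rentalNo CHAR(6) PRIMARY KEY,
                                  memberNo CHAR(5) REFERENCES Member);
    INSERT INTO Member VALUES ('M0001', 'Ann'), ('M0002', 'Bob');
    INSERT INTO RentalAgreement VALUES ('R1','M0001'),('R2','M0001'),('R3','M0002');
""")
# Option 1: calculate the derived value every time it is needed.
noofrentals = conn.execute(
    "SELECT COUNT(*) FROM RentalAgreement WHERE memberNo = 'M0001'"
).fetchone()[0]
print(noofrentals)  # 2
# Option 2: store it as a column of Member. Cheaper to read, but it must be
# kept consistent on every insert/delete of a RentalAgreement row.
conn.execute("ALTER TABLE Member ADD COLUMN noofrentals INTEGER DEFAULT 0")
conn.execute("""UPDATE Member SET noofrentals =
                  (SELECT COUNT(*) FROM RentalAgreement
                   WHERE RentalAgreement.memberNo = Member.memberNo)""")
stored = conn.execute(
    "SELECT noofrentals FROM Member WHERE memberNo = 'M0001'").fetchone()[0]
print(stored)  # 2
```

The slide's cost comparison is exactly this trade-off: Option 1 pays the COUNT on every read, Option 2 pays storage plus the maintenance UPDATE on every write.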
Step 3.3: Design remaining business rules To design the remaining business rules for the target DBMS. Some DBMS provide more facilities than others for defining business rules. Example: When we create table RentalAgreement we can define: CONSTRAINT membernotrentingtoomany CHECK (NOT EXISTS (SELECT memberno FROM RentalAgreement GROUP BY memberno HAVING COUNT(*) > 10))
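This is where "some DBMS provide more facilities than others" bites: many systems (SQLite among them) reject subqueries inside CHECK constraints, so a rule like membernotrentingtoomany is often implemented as a trigger instead. A minimal sketch, assuming a simplified RentalAgreement schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE RentalAgreement (rentalNo INTEGER PRIMARY KEY,
                                  memberNo CHAR(5) NOT NULL);
    -- SQLite does not allow a subquery in CHECK, so the business rule
    -- "a member may not rent more than 10 videos" becomes a trigger.
    CREATE TRIGGER membernotrentingtoomany
    BEFORE INSERT ON RentalAgreement
    WHEN (SELECT COUNT(*) FROM RentalAgreement
          WHERE memberNo = NEW.memberNo) >= 10
    BEGIN
        SELECT RAISE(ABORT, 'member already has 10 rentals');
    END;
""")
for _ in range(10):  # the first 10 rentals are allowed
    conn.execute("INSERT INTO RentalAgreement(memberNo) VALUES ('M0001')")
try:
    conn.execute("INSERT INTO RentalAgreement(memberNo) VALUES ('M0001')")
    blocked = False
except sqlite3.IntegrityError:
    blocked = True
print(blocked)  # True: the 11th rental is rejected
```

The trigger enforces the same predicate as the CHECK on the slide, just evaluated at insert time rather than declared on the table.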
Step 4: Choose file organizations and indexes Determine optimal file organizations to store the base tables, and the indexes required to achieve acceptable performance. Consists of the following steps: Step 4.1: Analyze transactions. Step 4.2: Choose file organizations. Step 4.3: Choose indexes.
Step 4.1: Analyze transactions To understand functionality of the transactions and to analyze the important ones. Identify performance criteria, such as: transactions that run frequently and will have a significant impact on performance; transactions that are critical to the business; times during the day/week when there will be a high demand made on the database (called the peak load). Use this information to identify the parts of the database that may cause performance problems. To select appropriate file organizations and indexes, also need to know high-level functionality of the transactions, such as: columns that are updated in an update transaction; criteria used to restrict records that are retrieved in a query.
Step 4.1: Analyze transactions Often not possible to analyze all expected transactions, so investigate most important ones. To help identify which transactions to investigate, can use: transaction/table cross-reference matrix, showing tables that each transaction accesses, and/or transaction usage map, indicating which tables are potentially heavily used. To focus on areas that may be problematic: (1) Map all transaction paths to tables. (2) Determine which tables are most frequently accessed by transactions. (3) Analyze the data usage of selected transactions that involve these tables.
Example: Cross-referencing transactions and tables
Example: Transaction usage map for some sample transactions showing expected occurrences:
Step 4.1: Analyze transactions Data usage analysis For each transaction determine: (a) Tables and columns accessed and type of access. (b) Columns used in any search conditions. (c) For query, columns involved in joins. (d) Expected frequency of transaction. (e) Performance goals of transaction.
Example: Transaction Analysis Form:
Step 4.2: Choose file organizations To determine an efficient file organization for each base table. File organizations include: Heap, Hash, Indexed Sequential Access Method (ISAM), B+-Tree, and Clusters. Some DBMSs (particularly PC-based DBMSs) have a fixed file organization that cannot be altered.
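The file organization itself lives inside the DBMS, but the access-cost difference it implies can be illustrated outside one. A toy sketch (not a real DBMS): a heap file must be scanned record by record, while a hash organization locates a record directly from its key.

```python
# Toy comparison of heap vs. hash access cost. The "file" is 10,000
# (staffNo, name) records; names and keys here are made up for illustration.
records = [("S%04d" % i, "name%d" % i) for i in range(10_000)]

def heap_lookup(key):
    """Linear scan of a heap file, counting records examined."""
    for examined, (k, v) in enumerate(records, start=1):
        if k == key:
            return v, examined
    return None, len(records)

# A Python dict plays the role of a hash file: key -> record in one probe.
hash_file = dict(records)

value, examined = heap_lookup("S9999")   # worst case: last record in the heap
print(examined)                          # 10000 records touched by the scan
print(hash_file["S9999"])                # direct access via the hash
```

This is why a heap suits bulk loads and full-table scans, while hash suits exact-match retrieval on the hash key; the choice in Step 4.2 follows from the transaction analysis in Step 4.1.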
Indexes: motivating example Linear search: if the records are not ordered, then to find a specific record we may need to examine all the records. If the total number of records is n, this takes time proportional to n. If the records are ordered (sorted) by key value, like alphabetically sorted words in a dictionary, search can be much more efficient. Binary search: if the records are sorted by key value and we search for a record with key K, we first look at the middle record, with key, say, K'. Compare K with K'. If K = K' then we are done. If not, then if K < K' we reduce the search to the left half of the records; if K > K' we reduce it to the right half. If the total number of records is n, binary search takes time proportional to log₂(n) (logarithm base 2 of n). Suppose the total number of records is n = 1024. If one comparison operation (e.g., K < K', K = K', or K > K') takes, say, 0.1 second, then linear search takes roughly n/10 seconds = 102.4 seconds (almost 2 minutes), but binary search takes roughly log₂(n)/10 seconds = log₂(1024)/10 seconds = 1 second! To compute log₂(1024), just count how many times 1024 = 2^10 must be divided by 2 until we reach 1. Thus log₂(1024) = 10.
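The halving argument above can be sketched directly; the counter confirms that for n = 1024 the search never needs more than log₂(1024) + 1 = 11 three-way comparisons.

```python
def binary_search(records, key):
    """Return (index of key in sorted list `records`, comparisons used),
    or (-1, comparisons) if the key is absent."""
    lo, hi, comparisons = 0, len(records) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1          # one three-way compare of key vs records[mid]
        if records[mid] == key:
            return mid, comparisons
        elif key < records[mid]:
            hi = mid - 1          # reduce search to the left half
        else:
            lo = mid + 1          # reduce search to the right half
    return -1, comparisons

records = list(range(1024))       # n = 1024 sorted key values
idx, comparisons = binary_search(records, 1023)
print(idx, comparisons)           # found at 1023, in at most 11 comparisons
```

An index gives the DBMS this same logarithmic behavior without physically sorting the whole file.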
Step 4.3: Choose indexes Determine whether adding indexes will improve the performance of the system. One approach is to keep the records unordered and create as many secondary indexes as necessary. Or could order the records in the table by specifying a primary or clustering index. In this case, choose the column for ordering or clustering the records as: the column that is used most often for join operations (this makes the join operation more efficient), or the column that is used most often to access the records in a table in the order of that column.
Step 4.3: Choose indexes If ordering column chosen is key of table, index will be a primary index; otherwise, index will be a clustering index. Each table can only have either a primary index or a clustering index. Secondary indexes provide additional keys for a base table that can be used to retrieve data more efficiently. Have to balance overhead in maintenance and use of secondary indexes against performance improvement gained when retrieving data. This includes: adding an index record to every secondary index whenever record is inserted; updating a secondary index when corresponding record is updated; increase in disk space needed to store the secondary index; possible performance degradation during query optimization to consider all secondary indexes.
Step 4.3: Choose indexes Guidelines for choosing the index wish-list: (1) Do not index small tables. (2) Index the PK of a table if it is not a key of the file organization. (3) Add a secondary index to any column that is heavily used as a secondary key. (4) Add a secondary index to a FK if it is frequently accessed. (5) Add a secondary index on columns that are involved in: selection or join criteria; ORDER BY; GROUP BY; and other operations involving sorting (such as UNION or DISTINCT). (6) Add a secondary index on columns involved in built-in functions. (7) Add a secondary index on columns that could result in an index-only plan. (8) Avoid indexing a column or table that is frequently updated. (9) Avoid indexing a column if the query will retrieve a significant proportion of the records in the table. (10) Avoid indexing columns that consist of long character strings.
Creating indexes in SQL To create a primary index on the Video table based on column catalogNo: CREATE UNIQUE INDEX catalognoprimaryindex ON Video(catalogNo); To create a secondary index on the Video table based on column title: CREATE INDEX titlesecondaryindex ON Video(title); To create a clustering index on the VideoForRent table based on column catalogNo: CREATE INDEX catalognoclusteringindex ON VideoForRent(catalogNo) CLUSTER; To drop (delete) an index: DROP INDEX catalognoprimaryindex;
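These statements can be exercised in SQLite via Python's sqlite3 (the CLUSTER clause is not standard SQL and is not supported there, so it is omitted). The sample row is made up; the index names follow the slide's naming pattern. EXPLAIN QUERY PLAN then shows whether the optimizer actually uses an index, which is the practical test of the choices made in Step 4.3.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Video (catalogNo CHAR(6), title VARCHAR(40))")
# Primary index on the key column; secondary index on title.
conn.execute("CREATE UNIQUE INDEX catalognoprimaryindex ON Video(catalogNo)")
conn.execute("CREATE INDEX titlesecondaryindex ON Video(title)")
conn.execute("INSERT INTO Video VALUES ('207132', 'Die Another Day')")
# Ask the optimizer how it would run an exact-match query on catalogNo.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM Video WHERE catalogNo = '207132'"
).fetchone()
print(plan[-1])  # SQLite reports a SEARCH ... USING INDEX catalognoprimaryindex
conn.execute("DROP INDEX catalognoprimaryindex")  # dropping works the same way
```

The exact wording of the plan output varies between SQLite versions, but the index name appears whenever the index is chosen.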