MASSACHUSETTS INSTITUTE OF TECHNOLOGY Database Systems: Fall 2008 Quiz II


Department of Electrical Engineering and Computer Science
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
6.830 Database Systems: Fall 2008 Quiz II

There are 14 questions and 11 pages in this quiz booklet. To receive credit for a question, answer it according to the instructions given. You can receive partial credit on questions. You have 80 minutes to answer the questions.

Write your name on this cover sheet AND at the bottom of each page of this booklet. Some questions may be harder than others. Attack them in the order that allows you to make the most progress. If you find a question ambiguous, be sure to write down any assumptions you make. Be neat. If we can't understand your answer, we can't give you credit!

THIS IS AN OPEN BOOK, OPEN NOTES QUIZ. NO PHONES, NO LAPTOPS, NO PDAS, ETC. YOU MAY USE A CALCULATOR.

Do not write in the boxes below:
1-3 (xx/14) | 4-6 (xx/26) | 7-8 (xx/10) | 9-11 (xx/26) | 12-14 (xx/24) | Total (xx/100)

Solutions. (The original booklet shows a histogram of scores in 5-point buckets.) µ = 76.78, σ = 10.82

6.830 Fall 2008, Quiz 2 Page 2 of 11

I Short Answer

1. [4 points]: List two reasons why a column-oriented design is advantageous in a data warehouse environment. (Write your answer in the spaces below.)

A. Data warehouse workloads involve large scans of a few columns, meaning that column-stores read less data from disk.

B. Columns tend to compress better than rows, which is good in a read-intensive setting.

Other answers are possible.

2. [4 points]: Which of the following statements about the MapReduce system are true: (Circle T or F for each statement.)

T F A MapReduce job will complete even if one of the worker nodes fails.
True: worker tasks are restarted at other nodes when a failure occurs.

T F A MapReduce job will complete even if the master node fails.
False: there is only a single master node, so if it fails the entire MapReduce job must be restarted.

T F Map workers send their results directly to reduce workers.
Map workers write their results to their local disk; the reducers pull from those disks when needed. Due to ambiguity about whether this counts as sending directly, this question was excluded.

T F If there are M map tasks, using more than M map workers may improve MapReduce performance.
True: MapReduce will start multiple copies of the last few map or reduce tasks, which can avoid issues with slow nodes.

3. [6 points]: Which of the following statements about the two-phase commit (2PC) protocol are true: (Circle T or F for each statement.)

T F In the no information case (i.e., when a subordinate asks the coordinator about the outcome of a transaction that the coordinator has no information about), a coordinator running the presumed abort version of 2PC returns the same response to the subordinate as it would in basic 2PC.
True: both versions answer ABORT.

T F The Presumed Commit protocol reduces the number of forced writes required by the coordinator compared to basic 2PC for transactions that successfully commit.
False: presumed commit forces a COLLECTING record to disk (and, on the root coordinator, a COMMIT record), while the traditional protocol forces only a COMMIT record.

T F The Presumed Abort protocol reduces the number of forced writes required of the coordinator compared to basic 2PC for transactions that abort.
True: with presumed abort the coordinator does not need to force-write any entries in the abort case.

T F In basic 2PC, a subordinate that votes to abort a transaction T receives no more messages pertaining to T, assuming neither T's coordinator nor any of T's subordinate nodes fail.
True: the coordinator does not need to alert the aborting subordinate about the abort.

II C-Store

Dana Bass is trying to understand the performance difference between row- and column-oriented databases. Her computer's disk can perform sequential I/O at 20 MB/sec, and takes 10 ms per seek; it has 10 MB of memory. She is performing range queries on a table with 10 4-byte integer columns and 100 million rows. Data is not compressed in either system, and all columns are sorted in the same order. Assume each column is laid out completely sequentially on disk, as is the row-oriented table.

4. [16 points]: Fill in the table below with your estimates of the minimal I/O time for each operation, in a row-store and a column-store. Assume that all reads are on contiguous groups of rows, and that queries output row-oriented tuples.

Solution: The cost for scanning sequential rows in a row-store is a seek to the first row, then a sequential scan of the remaining rows. We assume that the data can be processed as it is scanned: one thread sequentially loads tuples from disk while another applies any predicates or materialization in parallel. In this scenario, having 10 MB of RAM is the same as having 1 GB or 1 MB of RAM.

In a column-store, for each extra column one reads in parallel, the disk head must seek to a different file containing that column at a given offset. Thus, the best strategy is to read a chunk of each column into memory (dividing the 10 MB of RAM across the columns being read, and incurring one seek for each such chunk), materialize the rows, process them in memory, and then move to the next sequential chunk of each column. In the special case where only one column is read, treat the database as you would a row-store with one column.

No. Columns | No. Rows   | Column-Oriented System                          | Row-Oriented System
1           | 10 million | Read 10M rows x 4 bytes x 1 column / 20 MB/sec  | Read 10M rows x 4 bytes x 10 columns / 20 MB/sec
            |            | = 2 sec, + 10 ms seek: 2.01 sec total           | = 20 sec, + 10 ms seek: 20.01 sec total
10          | 10 million | Read 400 MB in 1 MB chunks (10 MB RAM /         | 20 sec sequential read (400 MB of data),
            |            | 10 columns = 1 MB per column): 20 sec           | + 10 ms seek: 20.01 sec total
            |            | sequential + 400 seeks x 10 ms = 4 sec:         |
            |            | 24 sec total                                    |
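The arithmetic behind these estimates can be sketched as a small cost model. The constants and helper functions below are ours, not part of the quiz; times are in seconds.

```python
# Disk and memory parameters from the problem statement.
SEQ_MBPS = 20.0   # sequential disk throughput, MB/sec
SEEK_S = 0.010    # time per disk seek, seconds
MEM_MB = 10.0     # RAM available for buffering column chunks
COL_BYTES = 4     # bytes per integer column value

def row_store_time(rows, total_cols=10):
    """One seek to the start of the range, then scan whole rows."""
    mb = rows * COL_BYTES * total_cols / 1e6
    return SEEK_S + mb / SEQ_MBPS

def col_store_time(rows, cols_read):
    """Read the needed columns in (MEM_MB / cols_read)-MB chunks, paying one
    seek per chunk to hop between the per-column files."""
    mb = rows * COL_BYTES * cols_read / 1e6
    if cols_read == 1:
        return SEEK_S + mb / SEQ_MBPS  # single column: pure sequential scan
    chunk_mb = MEM_MB / cols_read
    return (mb / chunk_mb) * SEEK_S + mb / SEQ_MBPS
```

Scanning one column of 10 million rows costs about 2.01 sec in the column store versus 20.01 sec in the row store, while reading all 10 columns flips the comparison (24 sec versus 20.01 sec).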

III BigTable

Consider an example table storing bank accounts in Google's BigTable system:

Key  | Value
adam | $300
evan | $600
sam  | $9000

The bank programmer implements the deposit function as follows:

function deposit(account, amount):
    currentbalance = bigtable.get(account)
    currentbalance += amount
    bigtable.put(account, currentbalance)

5. [4 points]: Given the table above, a user executes deposit("sam", $500). The program crashes somewhere during execution. What are the possible states of the sam row in the table? Provide a short explanation for your answer.

The row could have either sam = $9000 or sam = $9500, depending on whether the program crashed before or after it wrote the result.

6. [6 points]: Given the table above, a user executes deposit("sam", $500). The program completes. Immediately after the program completes, the power goes out in the BigTable data center, and all the machines reboot. When BigTable recovers, what are the possible states of the sam row in the table? Provide a short explanation for your answer.

The row can only have the value $9500, since BigTable does not inform a client that a write has completed until it has logged the update to disk on multiple physical machines. Thus, when BigTable recovers it will find the write in the log.

Now consider the following pair of functions:

function transfer(sourceaccount, destinationaccount, amount):
    sourcebalance = bigtable.get(sourceaccount)
    if sourcebalance < amount:
        raise Exception("not enough money to transfer")
    sourcebalance -= amount
    destinationbalance = bigtable.get(destinationaccount)
    destinationbalance += amount
    bigtable.put(sourceaccount, sourcebalance)
    bigtable.put(destinationaccount, destinationbalance)

function totalassets():
    assets = 0
    for account, balance in bigtable:
        assets += balance
    return assets

7. [6 points]: Given the table above, a program executes transfer("evan", "adam", 200). While that function is executing, another program executes totalassets(). What return values from totalassets() are possible? For the purposes of this question, assume there is only one version of each value, and that the values in BigTable are stored and processed in the order given above (i.e., adam, evan, sam). Provide a short explanation for your answer.

$9,900 is possible if totalassets reads all the balances either before or after the transfer modifies any of them. $9,700 is possible if totalassets reads adam before the $200 is added, but reads evan after the $200 is deducted.

It might seem like $10,100 is possible, by reading adam after adding $200 and reading evan before the subtraction. However, this is not possible: the transfer function performs the subtraction, then the addition, and the scan in totalassets reads the rows in order. Thus, if it sees the modified version of adam, it will also see the modified version of evan. It would not be a good idea to write an application that depends on this behavior, however, as it depends on the order of the keys and of the operations in the code, which could easily change.

8. [4 points]: If the data were stored in a traditional relational database system using strict two-phase locking, and transfer() and totalassets() were each implemented as single transactions, what would the possible results of totalassets() be? Provide a short explanation for your answer.

$9,900 is the only possible result, because the DBMS will use concurrency control to execute the totalassets transaction either strictly before or strictly after the transfer transaction.
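The interleaving argument for question 7 can be checked exhaustively with a small simulation. This is our own toy model, not BigTable's API: each put is atomic on a single row, the scan reads rows in key order, and any growing prefix of transfer's writes may be visible as the scan advances.

```python
from itertools import product

def possible_totals():
    initial = {"adam": 300, "evan": 600, "sam": 9000}
    scan_order = ["adam", "evan", "sam"]
    # transfer("evan", "adam", 200) writes in program order: evan, then adam
    puts = [("evan", 400), ("adam", 500)]
    totals = set()
    for applied in product(range(len(puts) + 1), repeat=len(scan_order)):
        # the number of already-applied puts can only grow as the scan advances
        if any(applied[i] > applied[i + 1] for i in range(len(applied) - 1)):
            continue
        total = 0
        for row, k in zip(scan_order, applied):
            balances = dict(initial, **dict(puts[:k]))  # state after k puts
            total += balances[row]
        totals.add(total)
    return totals
```

The only reachable totals are $9,900 and $9,700; $10,100 never appears, matching the argument above.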

IV Distributed Query Execution

Suppose you have a database with two tables with the following schema:

T1 : (A int PRIMARY KEY, B int, C varchar)
T2 : (D int REFERENCES T1.A, E varchar)

And suppose you are running the following query:

SELECT T1.A, COUNT(*)
FROM T1, T2
WHERE T1.A = T2.D AND T1.B > n
GROUP BY T1.A

Your system has the following configuration:

Parameter | Description                                       | Value
|T1|      | Size of table T1                                  | 300 MB
|T2|      | Size of table T2                                  | 600 MB
M         | Size of memory of a single node                   | 100 MB
||T1||    | Number of records in T1                           | 10 million
int       | Size of an integer, in bytes                      | 4
T         | Disk throughput, in bytes/sec                     | 20 MB/sec
N         | Network transmit rate, in bytes/sec               | 10 MB/sec
f         | Fraction of tuples selected by T1.B > n predicate | 0.5

9. [9 points]: Suppose you run the query on a single node. Assuming there are no indices, and the tables are not sorted, indicate what join algorithm (of the algorithms we have studied in class) would be most efficient, and estimate the time to process the query, ignoring disk seeks and CPU costs. Include a brief justification or calculation. (Write your answers in the spaces below.)

Join algorithm: An in-memory (simple) hash join. Read all of T1 (300 MB), applying the predicate and projecting to keep only T1.A. Build an in-memory hash table with the results: 10 million tuples in T1 x 1/2 = 5 million output tuples x 4 bytes per tuple = 20 MB of data. Sequentially scan T2 (600 MB), probing each T2.D value in the hash table. Emit T1.A for each matching join tuple, and build a hash aggregate to count the number of times each T1.A appears (5 million output tuples x (4-byte id + 4-byte count) = 40 MB). Finally, iterate over the aggregate and output the results to the user.

Estimated time, in seconds: Read T1 in 300 MB / 20 MB/s = 15 s. Read T2 in 600 MB / 20 MB/s = 30 s. Total time: 45 seconds.

10. [10 points]: Now suppose you switch to a cluster of 5 machines. If the above query is the only query running in the system, describe a partitioning for the tables and join algorithm that will provide the best performance, and estimate the total processing time (again, ignoring disk seeks and CPU cost). You may assume about the same number of T2.D values join with each T1.A value. Include a brief justification or calculation. You can assume the coordinator can receive an aggregate 50 MB/sec over the network from the five workers transmitting simultaneously at 10 MB/sec. (Write your answers in the spaces below.)

Partitioning strategy: Hash-partition the tables on T1.A and T2.D. Each node will have approximately 60 MB of T1 and 120 MB of T2; 30 MB of T1 will pass the predicate.

Join algorithm: Same as above, but with a smaller result set on each machine; the final local aggregates are sent to the coordinator. Since the tables are partitioned on the join keys, no networking is needed to process the joins.

Estimated time, in seconds: Each node has to read 180 MB from disk, which takes 9 seconds. Each node has 2 million T1 records, which are filtered to 1 million by the predicate (a 4 MB in-memory hash table of 4-byte integers for probing). The aggregation takes 8 MB in RAM, since each integer also needs a 4-byte counter. Once the local aggregation is complete, each node sends its results over the network to the coordinator, in parallel with the other nodes, at 10 MB/sec, taking 0.8 seconds. Total time: 9.8 seconds.
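Both estimates can be reproduced with a few lines of arithmetic. The constant and function names below are ours; sizes are in MB, rates in MB/sec, times in seconds.

```python
T1_MB, T2_MB = 300, 600
DISK_MBPS = 20.0
NET_MBPS = 10.0          # per-worker network transmit rate
T1_ROWS = 10_000_000
F = 0.5                  # selectivity of the T1.B > n predicate
INT_BYTES = 4

def single_node_seconds():
    """Scan T1 to build the filtered in-memory hash table, then scan T2."""
    return (T1_MB + T2_MB) / DISK_MBPS

def cluster_seconds(nodes=5):
    """Hash-partition T1 on A and T2 on D, so the join is node-local."""
    scan = (T1_MB + T2_MB) / nodes / DISK_MBPS
    # each node ships its local (A, count) aggregate to the coordinator
    agg_mb = T1_ROWS * F / nodes * (2 * INT_BYTES) / 1e6
    return scan + agg_mb / NET_MBPS
```

With the quiz's parameters this yields the 45-second single-node estimate and the 9.8-second five-node estimate.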

V Adaptive Query Processing

Jenny Join, after hearing the lecture about adaptive query processing, decides to add several types of adaptivity to her database system. First, Jenny decides to add simple plan-based adaptivity in the style of the Kabra and DeWitt paper mentioned in the "Adaptive Query Processing Through the Looking Glass" paper and discussed in class. Jenny runs a query with the following plan:

Sort-based Aggregate
  Join 3: Grace Hash, A.f3 = D.f1, joining with σ(D)
    Join 2: Grace Hash, A.f2 = C.f1, joining with C
      Join 1: Grace Hash, A.f1 = B.f1, over A and B

11. [6 points]: To help Jenny understand the Kabra and DeWitt approach, describe one way in which it might adapt to each of the following issues that arise during execution of the query shown above: (Write your answer in the space below each issue.)

A. After running Join 3, the system observes that many fewer tuples will be input to the sort-based aggregate than predicted by the optimizer.

If sufficient memory has been allocated to the aggregate node, it might switch the aggregate to a hash-based aggregate (or an in-memory sort).

B. After running Join 1, the system observes that many fewer tuples will be input to Join 2 than predicted by the optimizer.

If sufficient memory has been allocated to Join 2, it might switch that join to an in-memory hash join. Switching Join 2 and Join 3 is likely not a good idea: the cardinality of the input to Join 2 has gone down, not up, so it is unlikely that Join 2 will now be more expensive than Join 3.

C. After running Join 1, the optimizer predicts that many more tuples will be output by Join 2 than predicted before Join 1 ran (you may assume other optimizer estimates do not change significantly).

If the cost of Join 2 is now predicted to be greater than that of Join 3, the system might switch Join 2 and Join 3.

Now Jenny decides to add even more adaptivity to her system by allowing it to adapt between using secondary (unclustered) indices and sequential scans during query processing. Her query optimizer uses the following simple approximate cost model to estimate the time to perform a range scan of T tuples from a table A containing |A| disk pages, where the disk can read m pages per second and takes s seconds per seek. Assume that a disk page (either from the index or the heap file) holds t tuples, and that the scan is over a contiguous range of the key attribute of the index.

For sequential scans, each range scan requires one seek to the beginning of the heap file plus a scan of the entire file:

    C_seqscan = s + |A| / m

For a secondary (unclustered) B+Tree, each range scan involves reading T/t leaf index pages (assuming the top levels of the index are cached) plus one seek and one page read from the heap file per tuple (assuming no heap file pages are cached):

    C_secondary_btreescan = (T/t) / m + T * (s + 1/m)

Jenny finds that, when there are correlations between the key of a secondary B+Tree and the key of a clustered B+Tree (or the sort order of a table), the performance of the secondary B+Tree is sometimes much better than the cost formula given above.

12. [6 points]: Explain why, in the presence of correlations between secondary and clustered index keys, the C_secondary_btreescan model given above might overestimate the cost.

If there is a correlation between the secondary and the clustered index, then consecutive entries in the leaf pages of the secondary index may point to tuples that are consecutive (or nearly consecutive) in the clustered index. This means that, in reality, there won't be a seek and a page read for every tuple, as the model assumes.

13. [6 points]: Using the parameters given above, give a better approximation of C_secondary_btreescan in the presence of correlations.

If there is a perfect correlation, and assuming the cost model above for scanning the secondary index leaf pages is correct, then the cost will be approximately

    C_secondary_btreescan = (T/t) / m + (T/t) * (s + 1/m)

We divide T by t because there are t tuples per page, and we only have to seek to and read an additional page from the heap file once every t tuples.
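The three cost formulas can be written out as functions. This is a sketch; parameter names follow the text (T tuples scanned, |A| heap pages, m pages/sec, s seconds per seek, t tuples per page), and the example numbers in the usage note are ours.

```python
def c_seqscan(a_pages, m, s):
    """One seek plus a sequential scan of the whole heap file."""
    return s + a_pages / m

def c_secondary_btreescan(T, t, m, s):
    """T/t leaf pages read sequentially, plus one seek and one heap-page
    read per matching tuple (no correlation assumed)."""
    return (T / t) / m + T * (s + 1 / m)

def c_secondary_correlated(T, t, m, s):
    """Perfect correlation: consecutive index entries land on the same heap
    page, so one seek + one page read per t tuples instead of per tuple."""
    return (T / t) / m + (T / t) * (s + 1 / m)
```

For example, with T = 10,000 matching tuples, t = 100 tuples/page, m = 100 pages/sec, and s = 10 ms, the uncorrelated estimate is about 201 seconds while the correlated estimate is about 3 seconds, which is why the correlation matters so much.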

Inspired by the idea of adaptive query processing, Jenny wants to devise a technique to exploit her observation about correlations without having to maintain correlation statistics between the clustered attribute and all attributes over which there is a secondary index.

14. [12 points]: Suggest an adaptive query processing technique that changes the plan while it is running to switch between a sequential scan and a (possibly correlated) secondary index scan, depending on which is better. Your solution should not require precomputing any additional statistics. Be sure to indicate how your approach avoids producing duplicate results when it switches plans.

Start by using the secondary index, assuming correlations are present. If the number of seeks performed while scanning some small fraction of the range (e.g., the first 1%) is close to what the uncorrelated secondary-index model predicts, switch to a sequential scan if its overall cost will be lower. Avoid duplicates by keeping a list of secondary index key values that the system has already scanned, and filtering out tuples in that list. Since the scan proceeds in order of the secondary index key, this list can be efficiently represented as a range of keys (e.g., a (lower bound, upper bound) pair).

End of Quiz II