Chapter 13: Query Processing
Departamento de Engenharia Informática, Instituto Superior Técnico, 1st Semester 2008/2009
Slides (heavily) based on the official slides of the book Database System Concepts, © Silberschatz, Korth and Sudarshan.

Outline
1 Overview
2 Measures of Query Cost
3 Selection Operation
4 Sorting
5 Join Operation
6 Other Operations
7 Evaluation of Expressions

Basic Steps in Query Processing
1 Parsing and translation
2 Optimization
3 Evaluation

Basic Steps in Query Processing (cont.)
Parsing and translation: translate the query into its internal form and then into relational algebra; the parser checks syntax and verifies that the referenced relations exist.
Optimization: amongst all equivalent evaluation plans, choose the one with lowest cost. Cost is estimated using statistical information from the database catalog, such as the number of tuples in each relation, the size of tuples, etc.
Evaluation: the query-execution engine takes a query-evaluation plan, executes that plan, and returns the answers to the query.

Evaluation Plans
A relational algebra expression may have many equivalent expressions. For instance:
σ_balance<2500(π_balance(account)) ≡ π_balance(σ_balance<2500(account))
Each relational algebra operation can be evaluated using one of several different algorithms; correspondingly, a relational-algebra expression can be evaluated in many ways.
An annotated expression specifying a detailed evaluation strategy is called an evaluation plan. E.g., we can use an index on balance to find accounts with balance < 2500, or we can perform a complete relation scan and discard accounts with balance ≥ 2500.


Measures of Query Cost
Cost is generally measured as total elapsed time for answering a query. Many factors contribute to time cost: disk accesses, CPU, or even network communication.
Typically disk access is the predominant cost, and it is also relatively easy to estimate, taking into account:
Number of seeks × average seek cost
Number of blocks read × average block-read cost
Number of blocks written × average block-write cost
The cost to write a block is greater than the cost to read a block, because the data is read back after being written to ensure that the write was successful.

Measures of Query Cost (cont.)
For simplicity we just use the number of block transfers from disk and the number of seeks as the cost measures:
t_T: time to transfer one block
t_S: time for one seek
Cost for b block transfers plus S seeks: b · t_T + S · t_S
For simplicity:
We ignore CPU costs.
We do not include the cost of writing output to disk.
We use worst-case estimates.
Several algorithms can reduce disk I/O by using extra buffer space; the amount of real memory available for buffering depends on other concurrent queries and OS processes.
We do not take buffered data into account: the required data may already be buffer resident, avoiding disk I/O, but this is hard to account for in cost estimation.
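The cost measure above is just a weighted sum of transfers and seeks. A minimal Python sketch, using illustrative (made-up) timing constants in milliseconds:

```python
def io_cost(block_transfers, seeks, t_T=0.1, t_S=4.0):
    # Estimated elapsed time b*t_T + S*t_S; t_T and t_S are
    # illustrative values, not measured hardware constants.
    return block_transfers * t_T + seeks * t_S

# A linear scan of 100 contiguous blocks: 100 transfers, 1 seek.
print(io_cost(100, 1))
```

The same function is reused implicitly by every cost estimate in this chapter: each algorithm is summarized by its transfer count and seek count.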

Selection Operation
File scan algorithms, index scan algorithms, comparisons, composite selections

File Scan Algorithms
File scan: search algorithms that locate and retrieve records that fulfill a selection condition.
Algorithm A1 (linear search): scan each file block and test all records to see whether they satisfy the selection condition.
Cost = b_r block transfers + 1 seek, where b_r denotes the number of blocks containing records of relation r.
If the selection is on a key attribute, we can stop on finding the record: average cost = b_r/2 block transfers + 1 seek.
Linear search can be applied regardless of the selection condition, the ordering of records in the file, or the availability of indices.

File Scan Algorithms (cont.)
Algorithm A2 (binary search): applicable if the selection is an equality comparison on the attribute on which the file is ordered, assuming the blocks of the relation are stored contiguously.
Cost estimate (number of disk blocks to be scanned): cost of locating the first tuple by a binary search on the blocks, ⌈log2(b_r)⌉ · (t_T + t_S).
If there are multiple records satisfying the selection, add the transfer cost of the number of blocks containing records that satisfy the selection condition.
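A sketch of A2 in Python, with in-memory lists standing in for contiguous disk blocks (each block is a sorted list of (key, record) pairs; the data is illustrative). We binary-search on each block's first key, then scan forward while records still match, which also handles a run of duplicates that spans a block boundary:

```python
import bisect

def binary_search_selection(blocks, key):
    # Equality selection on the ordering attribute of a sorted,
    # contiguously stored file. Start one block early, because the
    # first match may sit at the end of the preceding block.
    first_keys = [blk[0][0] for blk in blocks]
    start = max(0, bisect.bisect_left(first_keys, key) - 1)
    result = []
    for blk in blocks[start:]:
        for k, rec in blk:
            if k == key:
                result.append(rec)
            elif k > key:        # sorted file: no further matches possible
                return result
    return result

blocks = [[(1, 'a'), (2, 'b')], [(2, 'c'), (3, 'd')], [(4, 'e'), (5, 'f')]]
print(binary_search_selection(blocks, 2))
```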

Selections Using Indices I
Index scan: search algorithms that use an index; the selection condition must be on the search key of the index.
Algorithm A3 (primary index on candidate key, equality): retrieve the single record that satisfies the equality condition. Cost = (h_i + 1) · (t_T + t_S), where h_i is the height of the index tree.
Algorithm A4 (primary index on nonkey, equality): the matching records will be on consecutive blocks. Cost = h_i · (t_T + t_S) + t_S + t_T · b, where b is the number of blocks containing matching records.

Selections Using Indices II
Algorithm A5 (equality on search key of secondary index):
Retrieve a single record if the search key is a candidate key: cost = (h_i + 1) · (t_T + t_S).
Retrieve multiple records if the search key is not a candidate key: cost = (h_i + n) · (t_T + t_S). This can be very expensive, since each of the n matching records may be on a different block.

Selections Involving Comparisons I
Selections of the form σ_A≥V(r) or σ_A≤V(r) can be implemented using a linear file scan or binary search, or by using indices in the following ways:
Algorithm A6 (primary index, comparison; relation sorted on A):
For σ_A≥V(r), use the index to find the first tuple with A ≥ V and scan the relation sequentially from there.
For σ_A≤V(r), just scan the relation sequentially until the first tuple with A > V; do not use the index.

Selections Involving Comparisons II
Algorithm A7 (secondary index, comparison):
For σ_A≥V(r), use the index to find the first index entry with A ≥ V and scan the index sequentially from there, to find pointers to records.
For σ_A≤V(r), just scan the leaf pages of the index, collecting pointers to records, until the first entry with A > V.
In either case, retrieving the records that are pointed to requires an I/O per record, so a linear file scan may be cheaper.

Selections Involving Conjunctions I
For selections of the form σ_θ1∧θ2∧...∧θn(r):
Algorithm A8 (conjunctive selection using one index): select a combination of θ_i and algorithms A1 through A7 that results in the least cost for σ_θi(r); test the other conditions on each tuple after fetching it into the memory buffer.
Algorithm A9 (conjunctive selection using a multiple-key index): use an appropriate composite (multiple-key) index if available.

Selections Involving Conjunctions II
Algorithm A10 (conjunctive selection by intersection of identifiers): requires indices with record pointers. Use the corresponding index for each condition and take the intersection of all the obtained sets of record pointers; then fetch the records from the file. If some conditions do not have appropriate indices, apply those tests in memory.

Selections Involving Disjunctions
For selections of the form σ_θ1∨θ2∨...∨θn(r):
Algorithm A11 (disjunctive selection by union of identifiers): applicable if all conditions have available indices; otherwise use a linear scan. Use the corresponding index for each condition and take the union of all the obtained sets of record pointers; then fetch the records from the file.

Selections Involving Negation
For selections of the form σ_¬θ(r): the result is simply the set of tuples that are not in σ_θ(r). We may need to use a linear scan on the file.


Sorting
We may build an index on the relation and then use the index to read the relation in sorted order; this may lead to one disk-block access per tuple.
For relations that fit in memory, techniques like quicksort can be used.
For relations that don't fit in memory, external sort-merge is a good choice.

External Sort-Merge
Let M denote the memory size (in blocks). The algorithm has two phases:
1 Create sorted runs
2 Merge the runs
Creating sorted runs: let i be 0, and repeatedly do the following until the end of the relation:
1 Read M blocks of the relation into memory
2 Sort the in-memory blocks
3 Write the sorted data to run R_i
4 Increment i
Let the final value of i be N.

External Sort-Merge (cont.)
Merging the runs (N-way merge), assuming for now that N < M:
Use N blocks of memory to buffer the input runs and 1 block to buffer the output. Read the first block of each run into its buffer page, then repeat:
1 Select the first record (in sort order) among all buffer pages
2 Write the record to the output buffer; if the output buffer is full, write it to disk
3 Delete the record from its input buffer page
4 If the buffer page becomes empty, read the next block (if any) of the run into the buffer
until all input buffer pages are empty.

External Sort-Merge (cont.)
If N ≥ M, several merge passes are required:
In each pass, contiguous groups of M − 1 runs are merged.
A pass reduces the number of runs by a factor of M − 1 and creates runs longer by the same factor. E.g., if M = 11 and there are 90 runs, one pass reduces the number of runs to 9, each 10 times the size of the initial runs.
Repeated passes are performed until all runs have been merged into one.
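The two phases can be sketched compactly in Python, with in-memory lists standing in for disk blocks and `heapq.merge` playing the role of the (M−1)-way merge; M here counts records that fit in memory, purely for illustration:

```python
import heapq

def external_sort_merge(records, M):
    # Sketch of external sort-merge. Assumes M >= 3, so that each
    # merge pass combines at least M - 1 = 2 runs.
    # Phase 1: create sorted runs of at most M records each.
    runs = [sorted(records[i:i + M]) for i in range(0, len(records), M)]
    # Phase 2: merge groups of M - 1 runs per pass until one run remains.
    while len(runs) > 1:
        runs = [list(heapq.merge(*runs[i:i + M - 1]))
                for i in range(0, len(runs), M - 1)]
    return runs[0] if runs else []

data = [7, 3, 9, 1, 8, 2, 6, 5, 4]
print(external_sort_merge(data, 3))
```

With M = 3 the nine records above produce three initial runs; the first pass merges them pairwise into two runs, and a second pass yields the final sorted output.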


Cost Analysis: Block Transfers
Total number of merge passes required: ⌈log_{M−1}(b_r / M)⌉.
Block transfers for the initial run creation, as well as for each pass, amount to 2·b_r. For the final pass we don't count the write cost, since we ignore the final write cost for all operations (the output of an operation may be sent to the parent operation without being written to disk).
Thus the total number of block transfers for external sorting is:
b_r · (2·⌈log_{M−1}(b_r / M)⌉ + 1)

Cost Analysis: Seeks
During run generation: one seek to read each run and one seek to write each run, i.e., 2·⌈b_r / M⌉ seeks.
During the merge phase, with buffer size b_b (reading/writing b_b blocks at a time), we need 2·⌈b_r / b_b⌉ seeks for each merge pass, except the final one, which does not require a write.
Total number of seeks:
2·⌈b_r / M⌉ + ⌈b_r / b_b⌉ · (2·⌈log_{M−1}(b_r / M)⌉ − 1)
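The two formulas are easy to get wrong by hand, so here they are as Python functions, checked against the slide's own example (M = 11 with 90 initial runs, i.e. b_r = 990 blocks, needs two merge passes):

```python
import math

def sort_transfers(b_r, M):
    # b_r * (2 * ceil(log_{M-1}(b_r / M)) + 1) block transfers.
    passes = math.ceil(math.log(math.ceil(b_r / M), M - 1))
    return b_r * (2 * passes + 1)

def sort_seeks(b_r, M, b_b):
    # 2*ceil(b_r/M) run-generation seeks, plus merge-phase seeks.
    passes = math.ceil(math.log(math.ceil(b_r / M), M - 1))
    return 2 * math.ceil(b_r / M) + math.ceil(b_r / b_b) * (2 * passes - 1)

print(sort_transfers(990, 11))
```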


Join Operation
Several different algorithms implement joins:
Nested-loop join
Block nested-loop join
Indexed nested-loop join
Merge join
Hash join
The choice is based on cost estimates. The following examples use this information:
Number of records: customer 10,000; depositor 5,000
Number of blocks: customer 400; depositor 100

Nested-Loop Join
To compute r ⋈_θ s:
for each tuple t_r in r do
    for each tuple t_s in s do
        test pair (t_r, t_s) for the join condition θ
        if θ is satisfied, add (t_r, t_s) to the result
    end
end
r is called the outer relation and s the inner relation of the join.
Requires no indices and can be used with any kind of join condition, but it is expensive, since it examines every pair of tuples in the two relations.
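A direct Python rendering of the nested-loop pseudocode, with records as tuples and the condition θ as a predicate (the relation contents are illustrative, not from the running bank example):

```python
def nested_loop_join(r, s, theta):
    # r join_theta s: test every (t_r, t_s) pair; works for any
    # join condition theta, at the price of |r| * |s| comparisons.
    return [tr + ts for tr in r for ts in s if theta(tr, ts)]

depositor = [('Jones', 'A-101'), ('Smith', 'A-215')]
account = [('A-101', 500), ('A-215', 700)]
out = nested_loop_join(depositor, account,
                       lambda tr, ts: tr[1] == ts[0])
print(out)
```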

Cost Analysis
In the worst case, if there is enough memory to hold only one block of each relation, the estimated cost is n_r · b_s + b_r block transfers, plus n_r + b_r seeks.
If the smaller relation fits entirely in memory, use it as the inner relation; this reduces the cost to b_r + b_s block transfers and 2 seeks.
Assuming worst-case memory availability, the cost estimates are:
With depositor as the outer relation: 5000 · 400 + 100 = 2,000,100 block transfers and 5000 + 100 = 5,100 seeks
With customer as the outer relation: 10000 · 100 + 400 = 1,000,400 block transfers and 10,400 seeks
If the smaller relation (depositor) fits entirely in memory, the cost estimate drops to 500 block transfers.
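The worst-case formula can be checked directly against the example numbers above:

```python
def nlj_cost(n_r, b_r, b_s):
    # Worst-case nested-loop join with one buffer block per relation:
    # returns (block transfers, seeks).
    return n_r * b_s + b_r, n_r + b_r

print(nlj_cost(5000, 100, 400))   # depositor as the outer relation
print(nlj_cost(10000, 400, 100))  # customer as the outer relation
```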

Block Nested-Loop Join
A variant of nested-loop join in which every block of the inner relation is paired with every block of the outer relation. To compute r ⋈_θ s:
for each block B_r of r do
    for each block B_s of s do
        for each tuple t_r in B_r do
            for each tuple t_s in B_s do
                check whether (t_r, t_s) satisfies the join condition
                if so, add (t_r, t_s) to the result
            end
        end
    end
end

Cost Analysis
Worst-case estimate: b_r · b_s + b_r block transfers, plus 2·b_r seeks; each block of the inner relation s is read once for each block of the outer relation.
Best case: b_r + b_s block transfers, plus 2 seeks.

Improvements to the Nested-Loop Algorithms
In block nested-loop join, use M − 2 disk blocks as the blocking unit for the outer relation; use the remaining two blocks to buffer the inner relation and the output. Cost: ⌈b_r / (M − 2)⌉ · b_s + b_r block transfers, plus 2·⌈b_r / (M − 2)⌉ seeks.
If the equi-join attribute forms a key on the inner relation, stop the inner loop on the first match.
Scan the inner loop forward and backward alternately, to make use of the blocks remaining in the buffer (with LRU replacement).
Use an index on the inner relation if available.
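The blocked cost formula in Python, applied to the running example with a hypothetical memory size of M = 22 blocks (so 20 blocks buffer the outer relation per chunk):

```python
import math

def bnlj_cost(b_r, b_s, M):
    # Block nested-loop join using M - 2 blocks for the outer relation:
    # returns (block transfers, seeks).
    chunks = math.ceil(b_r / (M - 2))
    return chunks * b_s + b_r, 2 * chunks

# depositor (100 blocks) outer, customer (400 blocks) inner, M = 22:
print(bnlj_cost(100, 400, 22))
```

With 20-block chunks, the inner relation is scanned only 5 times instead of 100, which is where the improvement over one-block-at-a-time nested loops comes from.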

Indexed Nested-Loop Join
Index lookups can replace file scans if the join is an equi-join or natural join and an index is available on the inner relation's join attribute (an index can even be constructed just to compute a join).
For each tuple t_r in the outer relation r, use the index to look up tuples in s that satisfy the join condition with t_r.
Worst case: the buffer has space for only one block of r, and for each tuple in r we perform an index lookup on s. Cost of the join: b_r · (t_T + t_S) + n_r · c, where c is the cost of traversing the index and fetching all matching s tuples for one tuple of r; c can be estimated as the cost of a single selection on s using the join condition.
If indices are available on the join attributes of both r and s, use the relation with fewer tuples as the outer relation.

An Example
Compute depositor ⋈ customer, with depositor as the outer relation.
customer has a primary B+-tree index on the join attribute, with 20 entries per node; customer has 10,000 tuples (400 blocks) and the tree height is 4; depositor has 5,000 tuples (100 blocks).
Cost of block nested-loop join: 100 · 400 + 100 = 40,100 block transfers + 2 · 100 = 200 seeks, assuming worst-case memory (may be significantly less with more memory).
Cost of indexed nested-loop join: 100 + 5000 · 5 = 25,100 block transfers and seeks; the CPU cost is likely to be lower than for block nested-loop join.

Merge Join
Sort both relations on their join attribute (if not already sorted), then merge the sorted relations to join them.
The merge step is similar to the merge stage of the sort-merge algorithm; the main difference is the handling of duplicate values in the join attribute: every pair with the same value on the join attribute must be matched.

Cost Analysis
Merge join can be used only for equi-joins and natural joins. Each block needs to be read only once, so the cost of merge join is:
b_r + b_s block transfers + ⌈b_r / b_b⌉ + ⌈b_s / b_b⌉ seeks
+ the cost of sorting, if the relations are unsorted.
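A Python sketch of the merge step described above, for a natural join on the first attribute of two already-sorted lists of tuples; the nested inner loops implement the duplicate handling (every pair within a group of equal keys is matched):

```python
def merge_join(r, s):
    # Equi-join of relations sorted on their first attribute.
    out, i, j = [], 0, 0
    while i < len(r) and j < len(s):
        if r[i][0] < s[j][0]:
            i += 1
        elif r[i][0] > s[j][0]:
            j += 1
        else:
            key = r[i][0]
            j_start = j
            # Match every r tuple in this key group against every
            # s tuple in the same group.
            while i < len(r) and r[i][0] == key:
                j = j_start
                while j < len(s) and s[j][0] == key:
                    out.append(r[i] + s[j][1:])  # drop duplicate key
                    j += 1
                i += 1
    return out

r = [(1, 'a'), (2, 'b'), (2, 'c'), (4, 'd')]
s = [(2, 'x'), (2, 'y'), (3, 'z')]
print(merge_join(r, s))
```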

Hash Join
Applicable for equi-joins and natural joins. A hash function h is used to partition the tuples of both relations:
h maps the values of the join attributes A (of both r and s) to {0, 1, ..., n}
r_0, r_1, ..., r_n denote the partitions of r tuples: each tuple t_r ∈ r is put in partition r_i, where i = h(t_r[A])
s_0, s_1, ..., s_n denote the partitions of s tuples: each tuple t_s ∈ s is put in partition s_i, where i = h(t_s[A])

Hash Join (cont.)
r tuples in r_i need only be compared with s tuples in s_i; they need not be compared with s tuples in any other partition, since:
An r tuple and an s tuple that satisfy the join condition have the same value for the join attributes.
If that value is hashed to some value i, the r tuple has to be in r_i and the s tuple in s_i.

Hash-Join Algorithm
To compute r ⋈ s:
1 Partition the relation s using the hash function h; one block of memory is reserved as the output buffer for each partition
2 Partition r similarly
3 For each i:
  a Load s_i into memory and build an in-memory hash index on it, on the join attribute, using a hash function different from the earlier h
  b Read the tuples in r_i from disk one by one; for each tuple t_r, locate each matching tuple t_s in s_i using the in-memory hash index, and output the concatenation of their attributes
Relation s is called the build input, and r the probe input.
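The algorithm can be sketched in Python for a natural join on the first attribute, with lists standing in for disk partitions; a Python dict plays the role of the in-memory hash index (the second, different hash function):

```python
def hash_join(r, s, n=4):
    # s is the build input, r the probe input; join key = first attribute.
    h = lambda v: hash(v) % n
    s_parts = [[] for _ in range(n)]
    r_parts = [[] for _ in range(n)]
    for ts in s:                       # step 1: partition s
        s_parts[h(ts[0])].append(ts)
    for tr in r:                       # step 2: partition r
        r_parts[h(tr[0])].append(tr)
    out = []
    for i in range(n):                 # step 3: build and probe per partition
        index = {}
        for ts in s_parts[i]:
            index.setdefault(ts[0], []).append(ts)
        for tr in r_parts[i]:
            for ts in index.get(tr[0], []):
                out.append(tr + ts[1:])
    return out

r = [(1, 'a'), (2, 'b')]
s = [(1, 'x'), (3, 'y')]
print(hash_join(r, s))
```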

Cost of Hash Join
The cost of hash join is:
3·(b_r + b_s) + 4·n block transfers + 2·(⌈b_r / b_b⌉ + ⌈b_s / b_b⌉) seeks
If the entire build input can be kept in main memory, no partitioning is required, and the cost estimate goes down to b_r + b_s.

An Example
Compute customer ⋈ depositor, assuming a memory size of 20 blocks, with b_depositor = 100 and b_customer = 400.
depositor is used as the build input: partition it into five partitions, each of size 20 blocks. Similarly, partition customer into five partitions, each of size 80 blocks.
Total cost: 3 · (100 + 400) = 1,500 block transfers + 2 · (⌈100/3⌉ + ⌈400/3⌉) = 336 seeks

Complex Joins
Join with a conjunctive condition, r ⋈_{θ1∧θ2∧...∧θn} s:
Either use nested loops / block nested loops, or
Compute the result of one of the simpler joins r ⋈_{θi} s; the final result comprises those tuples of the intermediate result that satisfy the remaining conditions θ1 ∧ ... ∧ θ_{i−1} ∧ θ_{i+1} ∧ ... ∧ θn
Join with a disjunctive condition, r ⋈_{θ1∨θ2∨...∨θn} s:
Either use nested loops / block nested loops, or
Compute it as the union of the records in the individual joins: (r ⋈_{θ1} s) ∪ (r ⋈_{θ2} s) ∪ ... ∪ (r ⋈_{θn} s)

Other Operations

Duplicate Elimination
Duplicate elimination can be implemented via hashing or sorting.
On sorting, duplicates come adjacent to each other, and all but one copy of each set of duplicates can be deleted. Optimization: duplicates can be deleted during run generation, as well as at the intermediate merge steps, in external sort-merge.
Hashing is similar: duplicates fall into the same bucket.
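The sorting-based variant is a one-pass scan over the sorted output, keeping a tuple only when it differs from its predecessor; a minimal in-memory sketch:

```python
def sort_distinct(records):
    # Duplicate elimination via sorting: after sorting, duplicates
    # are adjacent, so compare each tuple with the previous one.
    out = []
    for t in sorted(records):
        if not out or out[-1] != t:
            out.append(t)
    return out

print(sort_distinct([3, 1, 3, 2, 1]))
```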

Aggregation
Aggregation can be implemented in a manner similar to duplicate elimination: sorting or hashing can be used to bring tuples of the same group together, and then the aggregate functions can be applied to each group.
Optimization: combine tuples of the same group during run generation and intermediate merges, by computing partial aggregate values. For count, min, max, and sum, keep the aggregate value over the tuples found so far in the group; for avg, keep sum and count, and divide sum by count at the end.
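The partial-aggregate idea for avg can be sketched with a hash table keyed on the grouping attribute: carry (sum, count) per group and divide only at the end (the data below is illustrative):

```python
def hash_aggregate_avg(records):
    # records: (group_key, value) pairs. Partial state per group is
    # (sum, count); avg is computed only once all tuples are seen.
    partial = {}
    for key, value in records:
        s, c = partial.get(key, (0, 0))
        partial[key] = (s + value, c + 1)
    return {key: s / c for key, (s, c) in partial.items()}

rows = [('A', 10), ('B', 4), ('A', 20)]
print(hash_aggregate_avg(rows))
```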

Set Operations
Set operations (∪, ∩ and −) can use either a variant of merge join after sorting, or a variant of hash join.
Example: set operations using hashing. Partition both relations using the same hash function; for each partition i, build an in-memory hash index on r_i using a different hash function, and then:
For r ∪ s: add the tuples in s_i to the index if they are not already in it; at the end of s_i, add the tuples in the index to the result
For r ∩ s: output tuples of s_i to the result if they are in the hash index
For r − s: for each tuple in s_i, if it is in the hash index, delete it; at the end of s_i, add the remaining tuples of the index to the result

Outer Join
An outer join can be computed either as a join followed by the addition of null-padded non-participating tuples, or by modifying the join algorithms.
Modifying merge join to compute r ⟕ s: in r ⟕ s, the non-participating tuples are those in r − π_R(r ⋈ s); during merging, for every tuple t_r from r that does not match any tuple in s, output t_r padded with nulls. Right outer join and full outer join can be computed similarly.
Modifying hash join to compute r ⟕ s: if r is the probe relation, output non-matching r tuples padded with nulls; if r is the build relation, keep track while probing of which r tuples matched s tuples, and at the end of s_i output the non-matched r tuples padded with nulls.
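The probe-side modification of hash join can be sketched for a left outer join on the first attribute, with `None` standing in for null (building the index on s in one step, rather than per partition, to keep the sketch short):

```python
def left_outer_hash_join(r, s):
    # r left-outer-join s, r as the probe input: r tuples with no
    # match in s are output padded with None.
    index = {}
    for ts in s:
        index.setdefault(ts[0], []).append(ts)
    pad = (None,) * (len(s[0]) - 1) if s else ()
    out = []
    for tr in r:
        matches = index.get(tr[0], [])
        if matches:
            out.extend(tr + ts[1:] for ts in matches)
        else:
            out.append(tr + pad)   # non-participating tuple, null-padded
    return out

r = [(1, 'a'), (2, 'b')]
s = [(1, 'x')]
print(left_outer_hash_join(r, s))
```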


Evaluation of Expressions
Alternatives for evaluating an entire expression tree:
Materialization: generate the results of an expression whose inputs are relations or are already computed, and materialize (store) them on disk
Pipelining: pass tuples on to parent operations even as an operation is being executed

Materialization
Materialized evaluation: evaluate one operation at a time, starting at the lowest level; use intermediate results materialized into temporary relations to evaluate next-level operations.
Example: compute and store σ_balance<2500(account), then compute and store its join with customer, and finally compute the projection onto customer_name:
π_customer_name(σ_balance<2500(account) ⋈ customer)

Materialization (cont.)
Materialized evaluation is always applicable, but the cost of writing results to disk and reading them back can be quite high. Our cost formulas for operations ignore the cost of writing results to disk, so:
overall cost = sum of the costs of the individual operations + cost of writing intermediate results to disk
Double buffering: use two output buffers for each operation; when one is full, write it to disk while the other is being filled. This allows overlapping disk writes with computation and reduces execution time.

Pipelining
Pipelined evaluation: evaluate several operations simultaneously, passing the results of one operation on to the next.
E.g., in the previous expression tree, don't store the result of σ_balance<2500(account); instead, pass its tuples directly to the join. Similarly, don't store the result of the join; pass its tuples directly to the projection.
Much cheaper than materialization: no need to store a temporary relation to disk.
Pipelining may not always be possible, e.g., for sorting or hash join.
For pipelining to be effective, use evaluation algorithms that generate output tuples even as tuples are received on their inputs.
Pipelines can be executed in two ways: demand driven and producer driven.

Pipelined Evaluation
In demand-driven (or lazy) evaluation, the system repeatedly requests the next tuple from the top-level operation. Each operation requests the next tuple from its children as required, in order to output its next tuple; between calls, an operation has to maintain state so that it knows what to return next.
In producer-driven (or eager) pipelining, operators produce tuples eagerly and pass them up to their parents: a buffer is maintained between operators, the child puts tuples into the buffer, and the parent removes tuples from it. If the buffer is full, the child waits until there is space and then generates more tuples. The system schedules operations that have space in their output buffer and can process more input tuples.
Alternative names: pull and push models of pipelining.
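Demand-driven pipelining maps naturally onto Python generators: each operator pulls tuples from its child lazily, and the per-operation state between calls is kept implicitly in the generator's frame. A sketch of the σ/π pipeline from the materialization example (attribute positions are illustrative):

```python
def table_scan(rows):
    for row in rows:
        yield row

def select(pred, child):
    # Pulls from `child` only when the parent demands the next tuple.
    for row in child:
        if pred(row):
            yield row

def project(cols, child):
    for row in child:
        yield tuple(row[c] for c in cols)

account = [('A-101', 500), ('A-215', 700), ('A-102', 2600)]
# pi_{account_number}(sigma_{balance<2500}(account)), fully pipelined:
plan = project((0,), select(lambda t: t[1] < 2500, table_scan(account)))
print(list(plan))
```

No intermediate relation is ever materialized: each `next()` on the top operator propagates a demand down to the scan and one tuple flows back up.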

End of Chapter 13