Main Memory and the CPU Cache


Topics: the CPU cache, unrolled linked lists, B trees.

Our model of main memory and the cost of CPU operations has been intentionally simplistic. The major focus has been on determining the broad growth rate of the running times of algorithms, i.e. O notation. Program running time is determined by the algorithm, hardware speed and programming quality.

The simple cost model distinguishes CPU operations (e.g. i < j), data accesses to memory (RAM) (e.g. x = y), and I/O to disk, network or user; a CPU operation costs 1 and a memory access costs 1. In reality, the time to access an int in RAM is more than 200 times the time to compare two ints in registers.

CPU storage consists of registers and the CPU cache. Data to be used must be in a register; if it is not in a register it is requested from memory. To request data from memory, first check the CPU cache: if the data is in the cache (a cache hit) the access takes 2 to 20 cycles; if it is not in the cache (a miss) it takes approximately 200 cycles. (A cycle is the time of a CPU operation.) The resulting hierarchy is registers, then cache, then DRAM.

Sections of the CPU cache are referred to as cache lines; the typical cache line size is 64 bytes. When data is read from DRAM an entire block of data, a cache line, is copied into the cache. So if a 4-byte integer is to be read, an entire 64-byte block is copied into the cache; this is fast, as bandwidth is high. Data remains in the cache until it is replaced by the cache replacement algorithm. Implication: data related to a recent request may have been copied to the cache if it was physically close to the requested data.

Consider two loops over an array, int* arr = new int[1024 * 1024];
Loop 1: for (int i = 0; i < n; i++) arr[i] *= 3;
Loop 2: for (int i = 0; i < n; i += 16) arr[i] *= 3;
Loop 1 performs sixteen times as much work as Loop 2, and both are O(n), yet Loop 1 does not take 16 times as long to run. Actual times vary, but Loop 1 and Loop 2 have the same number of RAM accesses, i.e. the same number of cache misses.
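
The two loops above can be timed directly. Below is a minimal runnable sketch; the timing harness and the choice of n = 1024 * 1024 are illustrative assumptions rather than part of the original example.

    #include <chrono>
    #include <cstdio>

    int main() {
        const int n = 1024 * 1024;       // same array size as above
        int* arr = new int[n]();         // zero-initialised array

        // Loop 1: touch every element -- roughly one cache miss per 16 ints
        // (64-byte cache lines, 4-byte ints)
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < n; i++) arr[i] *= 3;
        auto t1 = std::chrono::steady_clock::now();

        // Loop 2: touch every 16th element -- still about one miss per cache line
        for (int i = 0; i < n; i += 16) arr[i] *= 3;
        auto t2 = std::chrono::steady_clock::now();

        using us = std::chrono::microseconds;
        std::printf("loop 1: %lld us, loop 2: %lld us\n",
                    (long long)std::chrono::duration_cast<us>(t1 - t0).count(),
                    (long long)std::chrono::duration_cast<us>(t2 - t1).count());
        delete[] arr;
        return 0;
    }

On typical hardware the two measured times are far closer than the 16x difference in instruction count would suggest, because both loops touch every cache line of the array.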

Consider two different implementations of a list of integers. Case 1: the list is stored in an array, arr; elements of arr are contiguous with their neighbours. Case 2: the list is stored in a linked list, ls; elements of ls are fragmented, physically separate from each other in main memory.

Assume the following costs: 4 cycles to read from a cache line and 200 cycles to read from DRAM. Time to read from the array: 16 elements are read into a cache line at a time, so the total number of cycles for each 16 elements is 200 (read into cache) + 64 (read from cache) = 264, roughly 16 cycles per element.

The linked list is fragmented over main memory: list elements are not contiguous, and the probability that related elements are read into the same cache line is small. Therefore every read takes 200 cycles, 12.5 times slower than the array. In practice arrays may be even faster than lists, because the OS may use a pre-fetching algorithm that predicts which area of main memory will be read next and fetches it before it is requested.

Cache-aware data structures and algorithms are designed to account for cache effects; run time is improved by reducing the number of cache misses. We will look at two examples: unrolled linked lists and B trees. (By contrast, the run time of cache-oblivious algorithms is unaffected by the cache structure.)

Linked lists are dynamic data structures: adding a new element when the list is full is very fast, O(1), in contrast to adding a new element to an array that is full, which is O(n). But traditional linked lists are cache oblivious, since elements may be fragmented across main memory, particularly when the list is constructed node by node over time. Solution: make a list of arrays.

List nodes contain a partially filled array of elements and a pointer to the next node. Each node has a size and a capacity: the size is the current number of elements, and the capacity is the maximum number of elements.
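
A node of this kind might be declared as below; the field names and the capacity of 16 ints (chosen so the element array roughly matches a 64-byte cache line) are illustrative assumptions.

    struct UnrolledNode {
        static const int CAPACITY = 16;  // maximum number of elements per node
        int elements[CAPACITY];          // partially filled array of elements
        int size;                        // current number of elements in use
        UnrolledNode* next;              // pointer to the next node
    };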

To traverse an unrolled linked list, traverse the linked list and at each node traverse the occupied part of the array. There is frequently a cache miss when visiting a new node, but few or zero cache misses when visiting the elements of a single node's array. Each array is at least half full, an invariant maintained by the insertion and removal algorithms.
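
A traversal sketch using the UnrolledNode layout above (the function name is an assumption): each new node is a likely cache miss, while the elements within a node are read from the same cache line(s).

    // Sum every element of the list by walking node to node and, within each
    // node, over the occupied part of its array only.
    long long sumUnrolledList(const UnrolledNode* head) {
        long long total = 0;
        for (const UnrolledNode* node = head; node != nullptr; node = node->next) {
            for (int i = 0; i < node->size; i++) {
                total += node->elements[i];
            }
        }
        return total;
    }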

Assume the following: 5 cycles for a cache hit, 200 cycles for a cache miss, and an item size of 4 bytes. Traversal time in CPU instructions = 5n + 200 × (number of cache misses). We will compare three structures: an array, a simple linked list and an unrolled linked list.

The number of cache misses for an array is 4n / 64 = n / 16, i.e. the number of times data has to be read into a cache line (ignoring the effects of pre-fetching). Traversal time in CPU instructions: 5n + 200n / 16 ≈ 17.5n.

The number of cache misses for a simple linked list is the number of nodes, n. Traversal time in CPU instructions: 5n + 200n = 205n.

The number of cache misses for an unrolled linked list is affected by two factors: the occupancy of the arrays (assume ½ full) and the additional overhead per node (assume an 8-byte pointer, a 4-byte size variable and 4 bytes of metadata, 16 bytes in total). Cache misses = (2n/16) × ((4×16 + 16) / 64) = 5n/32, roughly n/7. Traversal time in CPU instructions: 5n + 200n/7 ≈ 34n.

Traversal time in CPU instructions: array 17.5n, unrolled linked list 34n, simple linked list 205n. All are O(n) with different leading constants. All of these calculations are approximate and simplistic, and assume the worst case, but they do indicate actual relative performance.
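
The three estimates can be reproduced directly from the cost model above (5 cycles per element visited plus 200 cycles per cache miss). The helper below simply evaluates those formulas; it is an illustrative sketch, not part of the original notes.

    #include <cstdio>

    // Estimated traversal cost in cycles for n items under the simple model.
    double arrayCost(double n)        { return 5 * n + 200 * (n / 16.0); }     // ~17.5n
    double linkedListCost(double n)   { return 5 * n + 200 * n; }              // 205n
    double unrolledListCost(double n) { return 5 * n + 200 * (5 * n / 32.0); } // ~36n exactly; the notes' n/7 approximation gives ~34n

    int main() {
        double n = 1000000;
        std::printf("array %.0f, unrolled %.0f, linked list %.0f cycles\n",
                    arrayCost(n), unrolledListCost(n), linkedListCost(n));
        return 0;
    }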

The memory hierarchy, with approximate latency (in instructions), capacity and transfer size at each level:
Registers: 10s of bytes, latency 1, transfers of 8 bytes
L1 cache: 10s of KBs, latency 10, transfers in chunks of cache line size
L2 cache: MBs, latency 50, transfers in chunks of cache line size
Main memory: GBs, latency 200, transfers in chunks of disk block size
Disk storage: TBs, latency 5,000

External indexes are optimized to minimize disk reads, since reading data from a hard disk is very slow, many times slower than reading data from DRAM into a register. Such indexes are known as B trees, and they can be adapted to work with cache lines. B trees have two desirable properties: they are multiple-level indexes that maintain as many levels as are required for the file being indexed, and space on tree blocks is managed so that each block is at least ½ full.

B trees are balanced structures: all paths from the root to a leaf have the same length. Most B trees have a small number of levels, although any number of levels is possible. B trees are similar to binary search trees, except that B tree nodes have more than two children; that is, they have greater fan-out. As an external structure, a single B tree node maps to a single disk block; as a main memory structure, nodes map to cache lines.

We will look at a variant of B trees called B+ trees. The number of data entries in a node is determined by the size of the search key: a node holds up to n search key values and n + 1 pointers. Choose n to be as large as possible while still allowing n search keys and n + 1 pointers to fit in a cache line; n is therefore determined by the size of the search key values, the pointers and a cache line.
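
As a sketch of how n might be chosen, assume 4-byte keys, 8-byte pointers and 64-byte cache lines (all of these sizes are assumptions for illustration); the largest n for which n keys and n + 1 pointers fit in one cache line is then 4.

    // 4n + 8(n + 1) <= 64  =>  12n <= 56  =>  n = 4
    const int CACHE_LINE_BYTES = 64;
    const int KEY_BYTES        = 4;
    const int POINTER_BYTES    = 8;
    const int N = (CACHE_LINE_BYTES - POINTER_BYTES) / (KEY_BYTES + POINTER_BYTES);  // = 4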

In interior nodes, pointers point to next-level nodes. Label the search keys K1 to Kn and the pointers p0 to pn. Pointer p0 points to nodes whose search key values are less than K1; each other pointer pi points to nodes with search keys greater than or equal to Ki and less than K(i+1). An interior node must use at least (n + 1)/2 pointers. For example, an interior node with keys 12, 24, 29 and pointers p0, p1, p2, p3: p0 points to the subtree with search key values X < 12, p1 to 12 ≤ X < 24, p2 to 24 ≤ X < 29, and p3 to X ≥ 29.

Leaf nodes contain data or pointers to data, and contain their keys in order. The leftmost n pointers point to data objects; a leaf node must use at least (n + 1)/2 of these pointers, i.e. contain at least (n + 1)/2 key values. The rightmost pointer points to the next leaf. For example, a leaf with keys 12, 24, 29: its first three pointers point to the objects with keys 12, 24 and 29, and its last pointer points to the next leaf in the tree.

Example tree with n = 3 (note that (n + 1)/2 = 2): the root contains 17; its children contain 10 and 27, 87; the leaves contain 2 5 | 10 13 15 | 17 22 | 27 35 | 87 91.

The B+ tree search algorithm is similar to that of a binary search tree. To search for a value K, start at the root and end at a leaf. If the node is an interior node, follow the appropriate pointer to the next (interior or leaf) node; if the node is a leaf and the i-th key has the value K, then follow the i-th pointer to the data. Searching a B+ tree visits a number of nodes equal to the number of levels of the tree (height + 1), and each node entails a cache miss.
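
A search sketch following the rules above. The node layout and field names are illustrative assumptions (n = 4 keys per node, matching the earlier cache-line calculation), not the notes' exact structure.

    struct Node {
        bool  isLeaf;
        int   numKeys;       // keys currently in use (at most 4 here)
        int   keys[4];       // K1..Kn in ascending order
        Node* children[5];   // interior node: child pointers p0..pn
        void* records[4];    // leaf node: pointers to the data objects
        Node* nextLeaf;      // leaf node: pointer to the next leaf
    };

    // Descend from the root to the leaf that should contain K, then return
    // the record pointer if K is present, or nullptr if it is not.
    void* find(Node* root, int K) {
        Node* node = root;
        while (!node->isLeaf) {
            // follow p0 if K < K1, pn if K >= Kn, otherwise pi with Ki <= K < K(i+1)
            int i = 0;
            while (i < node->numKeys && K >= node->keys[i]) i++;
            node = node->children[i];
        }
        for (int i = 0; i < node->numKeys; i++)
            if (node->keys[i] == K) return node->records[i];
        return nullptr;
    }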

Using the example tree above (root 17; interior nodes 10 and 27, 87; leaves 2 5 | 10 13 15 | 17 22 | 27 35 | 87 91): which nodes are visited in a search for 22? Which nodes are visited in a search for 16?

B+ trees can be used to retrieve a range of values (range queries). Assume we wish to retrieve values from x to y: search the tree for the leaf that should contain value x, then follow the leaf pointers until a key greater than y is found. The tree can also be used to satisfy queries that have no lower bound or no upper bound.
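
A range-scan sketch for values in [x, y], reusing the Node type and descent from the search sketch above (again an illustrative assumption, not the notes' code).

    #include <vector>

    std::vector<void*> rangeFind(Node* root, int x, int y) {
        std::vector<void*> results;
        Node* node = root;
        while (!node->isLeaf) {                      // find the leaf that should contain x
            int i = 0;
            while (i < node->numKeys && x >= node->keys[i]) i++;
            node = node->children[i];
        }
        while (node != nullptr) {                    // follow the leaf chain to the right
            for (int i = 0; i < node->numKeys; i++) {
                if (node->keys[i] > y) return results;   // past the upper bound -- done
                if (node->keys[i] >= x) results.push_back(node->records[i]);
            }
            node = node->nextLeaf;
        }
        return results;
    }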

To insert a value, insert it in the appropriate place in a leaf node of the index: use the search algorithm to find the leaf node and insert the value; if it fits, the process is complete. If the target leaf node is full, then split it into two nodes: the first (n + 1)/2 values stay in the original node, and a new node is created to the right of the original node with the remaining (n + 1)/2 values. Insert the first search key value from the new leaf, together with a pointer to the new leaf, into its parent node.
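
A sketch of the leaf split just described, again using the Node type from the search sketch. The helper name and the rounding to ceil((n + 1)/2) are assumptions, and the parent update is left to the caller, as in the description.

    // Insert (K, rec) into a full leaf by splitting it. Returns the new right
    // leaf; *separator receives its first key, which the caller must insert
    // into the parent node along with a pointer to the new leaf.
    Node* splitLeafAndInsert(Node* leaf, int K, void* rec, int* separator) {
        const int n = 4;                       // keys per node in this sketch
        int   keys[n + 1];
        void* recs[n + 1];

        // Merge the existing keys and the new key into sorted order.
        int pos = 0;
        while (pos < n && leaf->keys[pos] < K) pos++;
        for (int i = 0; i < pos; i++) { keys[i] = leaf->keys[i]; recs[i] = leaf->records[i]; }
        keys[pos] = K; recs[pos] = rec;
        for (int i = pos; i < n; i++) { keys[i + 1] = leaf->keys[i]; recs[i + 1] = leaf->records[i]; }

        // The first half stays in the original leaf, the rest move to a new leaf.
        int keep = (n + 2) / 2;                // ceil((n + 1) / 2)
        Node* right = new Node();
        right->isLeaf  = true;
        right->numKeys = (n + 1) - keep;
        for (int i = 0; i < right->numKeys; i++) { right->keys[i] = keys[keep + i]; right->records[i] = recs[keep + i]; }

        leaf->numKeys = keep;
        for (int i = 0; i < keep; i++) { leaf->keys[i] = keys[i]; leaf->records[i] = recs[i]; }

        right->nextLeaf = leaf->nextLeaf;      // chain the new leaf after the original
        leaf->nextLeaf  = right;

        *separator = right->keys[0];           // first key of the new leaf
        return right;
    }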

Adding a value to an interior node may also cause it to split. If so, after inserting the new entry there are n + 1 keys and n + 2 pointers. The first (n + 2)/2 pointers stay in the original node; create a new node to the right of the original node with the remaining (n + 2)/2 pointers. Leave the first n/2 keys in the original node and move the last n/2 keys to the new node. The remaining key's value falls between the values in the original and new nodes; this left-over key is inserted into the parent of the node, along with a pointer to the new interior node.

Moving a value to a higher, interior level of the tree may again cause a split. The same process is repeated until no further splits are required, or until a new root node has been created. If a new root is created it will initially have just one key and two children, so it will be less than half full; this is permitted for the root (only).

Worked example (n = 3). Insert 2, 21 and 11: the tree is a single leaf containing 2, 11, 21, with pointers to the data objects; the values are maintained in order in the index pages. Note that leaf nodes may just contain values rather than pointers to values, depending on what is being stored in the tree, so a leaf might simply have the structure 2 11 21 (values only).

Insert 8: the leaf is full, so create a new node with the last half of the values (11, 21) and chain it to the original node, which keeps 2, 8. Create a new root containing 11, the first value of the new leaf node.

Insert 64, then 5: the leaves are now 2 5 8 and 11 21 64 under the root 11, and both leaf nodes are now full. Next, insert 23...

Inserting 23 splits the right leaf into 11 21 and 23 64, and 23 is inserted into the root, which becomes 11 23. The leaves are now 2 5 8 | 11 21 | 23 64.

Insert 97 and 6: 97 fits into the rightmost leaf without a split. Now insert 6.

Insert 6: the leftmost leaf 2 5 8 is full, so it splits into 2 5 and 6 8, and 6, the first value of the new leaf, is inserted into the parent.

After inserting 6 the root contains 6 11 23 and is now full.

Insert 19 and 9: both fit into existing leaves (9 into 6 8 and 19 into 11 21), so no splits are needed.

The same tree: root 6 11 23; leaves 2 5 | 6 8 9 | 11 19 21 | 23 64.

Insert 7: the leaf 6 8 9 is full, so it splits into 6 7 and 8 9, and 8 must now be inserted into the parent 6 11 23, which will require that it splits.

Inserting 8 and a pointer into the root node: keep the first keys (6 and 8) and half of the pointers in the original node, move the last key (23) and the remaining pointers to a new node, and make the middle value, 11, the new root.

Tree after inserting 7: root 11; interior nodes 6 8 and 23; leaves 2 5 | 6 7 | 8 9 | 11 19 21 | 23 64.

Insert more values (31, 39, 45, 51, 60, 93): root 11; interior nodes 6 8 and 23 45 60; leaves 2 5 | 6 7 | 8 9 | 11 19 21 | 23 31 39 | 45 51 | 60 64 93.

Now insert 77: the leaf 60 64 93 is full, so it splits into 60 64 and 77 93, and 77 must be inserted into its parent 23 45 60, which is also full.

The interior node splits: 23 45 stay in the original node, 77 moves to a new node, and the middle value, 60, must now be inserted into the root.

After inserting 60 into the root, the root contains 11 60 and the interior nodes are 6 8, 23 45 and 77; the leaves are 2 5 | 6 7 | 8 9 | 11 19 21 | 23 31 39 | 45 51 | 60 64 | 77 93.

To remove a value, find it in the leaf node and delete it. This may result in there being too few entries in the node. If so, select an adjacent sibling of the node and redistribute values between the two nodes so that both nodes have enough entries. If this is not possible, coalesce the two nodes and delete the appropriate value and pointer in the parent node.

Redistribution: a value and pointer are removed from an adjacent sibling and inserted in the node with insufficient entries. The sibling can be the left or the right sibling, although it makes a slight difference to the process. The chosen node must be a sibling to ensure that only a single parent node is affected. After redistribution, one of the two nodes will have a different first search key value, and the corresponding value in the parent node must be changed to this value. If the node's sibling(s) have insufficient entries, redistribution may not be possible.

When redistribution is not possible, two nodes can be combined (coalesced). Keep track of the value in the parent between the pointers to the two nodes to be combined. Insert all of the values (and pointers) from one node into the other, and re-connect the links between leaves (if the nodes are leaves). Then make a recursive call to the removal process, removing the identified value in the parent node; this, in turn, may require non-leaf nodes to be coalesced.

The removal algorithm requires a choice to be made between siblings, and such a choice has to be implemented in the algorithm. Coalescing nodes requires more work and may result in making changes up the tree, but the tree height may be reduced. Redistributing values requires less work, but does not impact the height of the tree.

Remove 19 (from the tree built above): use search to find the value, then delete it from the leaf 11 19 21, which becomes 11 21 and still has enough entries.

Remove 45: after deleting it, the leaf 45 51 is left with only 51, which is less than half full (a leaf must have at least (n + 1)/2 = 2 pointers to records).

So take a value from the left sibling node, 23 31 39.

39 moves into the underfull leaf, which becomes 39 51, and the corresponding value in the parent node is changed from 45 to 39.

Remove 9 (again starting from the full tree): the leaf 8 9 is left with only 8, but leaf nodes must have at least (n + 1)/2 values. The node to its right is not a sibling, and the left sibling 6 7 has only the minimum number of entries, so redistribution is not possible and the node must be coalesced with its left sibling.

After coalescing, the combined leaf holds 6 7 8; now remove the corresponding entry (8) and its pointer from the parent node.

The parent is left with one key (6) and two pointers, which is enough, so the removal is complete.

Remove 6: the leaf 6 7 8 becomes 7 8, which still has enough values; the parent entry does not need to change.

Remove 8: the leaf 7 8 becomes just 7, which is too few, and its left sibling 2 5 has only the minimum number of values, so the two leaves are coalesced into 2 5 7; now remove the entry and pointer from the parent.

Removing that entry leaves the interior node with no keys and just 1 pointer, which is too few, so take a pointer from its sibling, the interior node 23 45. But which key value should move?

When redistributing between interior nodes the key moves through the parent: the root key 11 moves down into the underfull node, the sibling's first pointer moves across with it, and the sibling's first key, 23, moves up into the root.

Finished: the root is 23 60, the interior nodes are 11, 45 and 77, and the leaves are 2 5 7 | 11 19 21 | 23 31 39 | 45 51 | 60 64 | 77 93.

Exercise: starting from this tree, what happens if we remove 23 and 31?

Splitting and merging of index blocks is rare; typically the value of n will be much greater than 3. Most splits or merges are limited to two leaves and one parent. The number of cache misses per search is based on the tree height: assuming nodes are half full, it is equal to log base (capacity/2) of the number of values in the tree. For example, if the capacity is 16 the effective fan-out is 8, and a tree of about ten levels (8^10 ≈ 10^9) would contain approximately one billion values, so a search costs about ten cache misses.
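
A quick check of that estimate; the numbers come from the notes, and the snippet itself is just illustrative arithmetic.

    #include <cmath>
    #include <cstdio>

    int main() {
        double capacity = 16;                  // keys per node
        double values   = 1e9;                 // values stored in the tree
        // cache misses per search ~= log base (capacity / 2) of the number of values
        double misses = std::log(values) / std::log(capacity / 2);
        std::printf("about %.0f cache misses per search\n", std::ceil(misses));  // prints 10
        return 0;
    }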