CMSC424: Database Design. Instructor: Amol Deshpande

CMSC424: Database Design Instructor: Amol Deshpande amol@cs.umd.edu

Databases Data Models Conceptual representation of the data Data Retrieval How to ask questions of the database How to answer those questions Data Storage How/where to store data, how to access it Data Integrity Manage crashes, concurrency Manage semantic inconsistencies

Query Processing/Storage [Figure: user queries go to the Query Processing Engine, which sends page requests to the Buffer Manager and receives pointers to pages; the Buffer Manager sends block requests to the disk and receives data] Query Processing Engine: Given an input user query, decide how to execute it Specify the sequence of pages to be brought into memory Operate upon the tuples to produce results Buffer Management: Bringing pages from disk to memory Managing the limited memory Space Management on Persistent Storage (e.g., Disks): Storage hierarchy How are relations mapped to files? How are tuples mapped to disk blocks?

Outline Storage hierarchy Disks RAID File Organization Etc.

Storage Hierarchy Tradeoffs between speed and cost of access Volatile vs non-volatile Volatile: loses contents when power is switched off Sequential vs random access Sequential: read the data contiguously (select * from employee) Random: read the data from anywhere at any time (select * from employee where name like ...) Why care? Need to know how data is stored in order to optimize and to understand what's going on

Storage Hierarchy SSD

Storage Hierarchy: Cache Cache Super fast; volatile; typically on chip L1 vs L2 vs L3 caches??? L1 about 64KB or so; L2 about 1MB; L3 8MB (on chip) to 256MB (off chip) Huge L3 caches are available nowadays Becoming more and more important to care about this Cache misses are expensive Similar tradeoffs as were seen between main memory and disks Cache-coherency??

Storage Hierarchy: Cache K8 core in the AMD Athlon 64 CPU

Storage Hierarchy Main memory 10s or 100s of ns; volatile Pretty cheap and dropping: 1 GByte < $100 Main-memory databases are feasible nowadays Flash memory (EEPROM) Limited number of write/erase cycles Non-volatile, slower than main memory (especially writes) Examples? Question: How does what we discuss next change if we use flash memory only? Key issue: Random access is as cheap as sequential access

Storage Hierarchy Magnetic Disk (Hard Drive) Non-volatile Sequential access much much faster than random access Discuss in more detail later Optical Storage - CDs/DVDs; Jukeboxes Used more as backups Why? Very slow to write (if possible at all) Tape storage Backups; super-cheap; painful to access IBM just released a secure tape drive storage solution

Storage Primary e.g. Main memory, cache; typically volatile, fast Secondary e.g. Disks; Solid State Drives (SSD); non-volatile Tertiary e.g. Tapes; Non-volatile, super cheap, slow

Outline Storage hierarchy Disks RAID File Organization Etc.

1956 IBM RAMAC 24 platters 100,000 characters each 5 million characters

1979 SEAGATE 5MB 1998 SEAGATE 47GB 2006 Western Digital 500GB Weight (max. g): 600g

Latest: Single hard drive: Seagate Barracuda 7200.10 SATA 750 GB 7200 rpm weight: 720g Uses perpendicular recording Microdrives IBM 1 GB Toshiba 80GB

Typical Values Diameter: 1 inch to 15 inches Cylinders: 100 to 2,000 Surfaces: 1 or 2 Tracks/cyl: 2 (floppies) to 30 Sector Size: 512B to 50K Capacity: 360 KB to 2TB (as of Feb 2010) Rotations per minute (rpm): 5400 to 15000

Accessing Data Accessing a sector Time to seek to the track (seek time): average 4 to 10ms + Waiting for the sector to get under the head (rotational latency): average 4 to 11ms + Time to transfer the data (transfer time): very low About 10ms per access So with randomly accessed blocks, we can only do about 100 block transfers per second: 100 x 512 bytes = 50 KB/s Data transfer rates Rate at which data can be transferred (w/o any seeks): 30-50MB/s up to 200MB/s (Compare to above) Seeks are bad!
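
To make the tradeoff concrete, here is the slide's arithmetic as a small Python sketch (the 10 ms per access and 50 MB/s sequential rate are the figures assumed above):

```python
# Back-of-the-envelope: random vs sequential disk throughput.
access_time_s = 0.010                    # ~10 ms seek + rotation + transfer
sector_bytes = 512

accesses_per_sec = 1 / access_time_s     # ~100 random accesses per second
random_bps = accesses_per_sec * sector_bytes
print(f"random: {random_bps / 1024:.0f} KB/s")       # ~50 KB/s

sequential_bps = 50 * 1024 * 1024        # ~50 MB/s when there are no seeks
print(f"sequential is {sequential_bps / random_bps:.0f}x faster")
```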

Seagate Barracuda: 1TB Heads: 8, Disks: 4 Bytes per sector: 512 Default cylinders: 16,383 Default sectors per track: 63 Default read/write heads: 16 Spindle speed: 7200 rpm Internal data transfer rate: 1287 Mbits/sec max Average latency: 4.16 msec Track-to-track seek time: 1 msec to 1.2 msec Average seek: 8.5 to 9.5 msec We also care a lot about power nowadays Why?

Reliability Mean time to/between failure (MTTF/MTBF): 57 to 136 years Consider 1000 new disks, each with an MTTF of 1,200,000 hours On average, one of them will fail every 1,200,000 / 1000 = 1,200 hours = 50 days!

Disk Controller Interface between the disk and the CPU Accepts the commands Checksums to verify correctness Remaps bad sectors

Optimizing block accesses Typically sectors are too small Block: a contiguous sequence of sectors, 512 bytes to several KBytes All data transfers are done in units of blocks Scheduling of block access requests? Considerations: performance and fairness Elevator algorithm
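
The elevator algorithm mentioned above services requests in the current direction of head travel before reversing. A minimal sketch, assuming requests are identified by cylinder number (a real controller also batches and reorders within tracks):

```python
def elevator_schedule(head_pos, cylinders, moving_up=True):
    """SCAN: serve requests in the direction of head movement, then reverse."""
    up = sorted(c for c in cylinders if c >= head_pos)
    down = sorted((c for c in cylinders if c < head_pos), reverse=True)
    return up + down if moving_up else down + up

# Head at cylinder 50, moving toward higher cylinders:
print(elevator_schedule(50, [95, 10, 60, 33, 70]))  # [60, 70, 95, 33, 10]
```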

Solid State Drives Essentially flash that emulates hard disk interfaces No seeks Much better random-read performance Writes are slower, and the number of writes to the same location is limited Must write an entire block at a time About a factor of 10 more expensive right now Will soon lead to perhaps the most radical hardware configuration change in a while

Outline Storage hierarchy Disks RAID File Organization Etc.

RAID Redundant array of independent disks Goal: Disks are very cheap Failures are very costly Use extra disks to ensure reliability If one disk goes down, the data still survives Also allows faster access to data Many RAID levels Different reliability and performance properties

RAID Levels (a) No redundancy. (b) Make a copy of the disks. If one disk goes down, we have a copy. Reads: Can go to either disk, so higher data rate possible. Writes: Need to write to both disks.

RAID Levels (c) Memory-style error correcting: keep extra bits around so we can reconstruct. Superseded by the scheme below. (d) One disk contains parity for the main data disks. Can handle a single disk failure. Little overhead (only 25% in the above case).

RAID Level 5 Distributed parity blocks instead of bits Subsumes Level 4 Normal operation: Read: directly from the disk. Uses all 5 disks Write: need to read and update the parity block To update 9 to 9': read 9 and P2, compute P2' = P2 xor 9 xor 9', write 9' and P2'

RAID Level 5 Failure operation (disk 3 has failed) Read block 0: read it directly from disk 2 Read block 1 (which is on disk 3): read P0, 0, 2, 3 and compute 1 = P0 xor 0 xor 2 xor 3 Write: to update 9 to 9', read 9 and P2... but P2 is on disk 3! So just write 9'; no need to update the parity
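
Both the small-write path and the failure path come down to XOR arithmetic. A sketch with made-up one-byte blocks (P' = P xor old xor new for the write; XOR of the survivors for reconstruction):

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Four data blocks in one stripe, plus their parity (contents made up).
blocks = [b"\x10", b"\x22", b"\x37", b"\x4c"]
parity = b"\x00"
for blk in blocks:
    parity = xor(parity, blk)

# Small write: update block 2 without touching the other data blocks.
old, new = blocks[2], b"\x99"
parity = xor(xor(parity, old), new)      # P' = P xor old xor new
blocks[2] = new

# Disk holding block 1 fails: rebuild it from the survivors and parity.
recovered = parity
for i, blk in enumerate(blocks):
    if i != 1:
        recovered = xor(recovered, blk)
assert recovered == blocks[1]
```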

Choosing a RAID level Main choice is between RAID 1 and RAID 5 Level 1 has better write performance than Level 5 Level 5: 2 block reads and 2 block writes to write a single block Level 1: only requires 2 block writes Level 1 preferred for high-update environments such as log disks Level 5 has lower storage cost Level 1 needs 60% more disks Level 5 is preferred for applications with a low update rate and large amounts of data

Outline Storage hierarchy Disks RAID Buffer Manager File Organization Indexes

Query Processing/Storage [Figure: user queries go to the Query Processing Engine, which sends page requests to the Buffer Manager and receives pointers to pages; the Buffer Manager sends block requests to the disk and receives data] Query Processing Engine: Given an input user query, decide how to execute it Specify the sequence of pages to be brought into memory Operate upon the tuples to produce results Buffer Management: Bringing pages from disk to memory Managing the limited memory Space Management on Persistent Storage (e.g., Disks): Storage hierarchy How are relations mapped to files? How are tuples mapped to disk blocks?

Buffer Manager When the QP wants a block, it asks the buffer manager The block must be in memory to operate upon Buffer manager: If the block is already in memory: return a pointer to it If not: evict a current page (either write it to temporary storage, or write it back to its original location, or just throw it away if it was read from disk and not modified) and make a request to the storage subsystem to fetch the new block

Buffer Manager [Figure: page requests from higher levels are served from a buffer pool of frames in main memory; pages are read from and written to the database on disk; the choice of frame is dictated by the replacement policy]

Buffer Manager Similar to a virtual memory manager Buffer replacement policies What page to evict? LRU: Least Recently Used Throw out the page that was not used in a long time MRU: Most Recently Used The opposite Why? Clock? An efficient approximation of LRU
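
A toy LRU policy is easy to express with an ordered dictionary. This sketch ignores pinning and dirty-page write-back (covered on the next slide); the fetch callback is a stand-in for the storage subsystem:

```python
from collections import OrderedDict

class LRUBufferPool:
    """Toy buffer pool: on a miss with no free frame, evict the
    least-recently-used page (no pinning or write-back here)."""
    def __init__(self, num_frames, fetch_from_disk):
        self.frames = OrderedDict()            # page_id -> page contents
        self.num_frames = num_frames
        self.fetch = fetch_from_disk

    def get_page(self, page_id):
        if page_id in self.frames:             # hit: mark as most recent
            self.frames.move_to_end(page_id)
        else:
            if len(self.frames) >= self.num_frames:
                self.frames.popitem(last=False)    # evict the LRU page
            self.frames[page_id] = self.fetch(page_id)
        return self.frames[page_id]

pool = LRUBufferPool(3, fetch_from_disk=lambda pid: f"<page {pid}>")
for pid in (1, 2, 3, 1, 4):       # page 2 is least recent when 4 arrives
    pool.get_page(pid)
print(list(pool.frames))          # [3, 1, 4]: page 2 was evicted
```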

Buffer Manager Pinning a block: not allowed to write it back to the disk Force-output (force-write): force the contents of a block to be written to disk Ordering the writes: this block must be written to disk before that block Critical for fault-tolerance guarantees Otherwise the database has no control over what's on disk and what's not on disk

Outline Storage hierarchy Disks RAID Buffer Manager File Organization Etc.

File Organization How are relations mapped to disk blocks? Use a standard file system? High-end systems have their own OS/file systems The OS interferes more than it helps in many cases Mapping of relations to files: one-to-one? There are advantages to storing multiple relations clustered together A file is essentially a collection of disk blocks How are tuples mapped to disk blocks? How are they stored within each block?

File Organization Goals: Allow insertion/deletions of tuples/records Fetch a particular record (specified by record id) Find all tuples that match a condition (say SSN = 123)? Simplest case Each relation is mapped to a file A file contains a sequence of records Each record corresponds to a logical tuple Next: How are tuples/records stored within a block?

Fixed Length Records n = number of bytes per record Store record i at position: n * (i - 1) Records may cross blocks Not desirable Stagger them so that doesn't happen Inserting a tuple? Depends on the policy used One option: simply append at the end of the file Deletions? Option 1: rearrange Option 2: keep a free list and use it for the next insert

Fixed Length Records Deleting: using free lists
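
A sketch of the free-list scheme: each deleted slot stores the number of the next free slot, and inserts pop the head of that list (record contents and naming here are illustrative):

```python
class FixedLengthFile:
    """Fixed-length records with free-list deletion (a sketch)."""
    def __init__(self):
        self.slots = []        # slot i holds a record, or the next free slot
        self.free_head = None  # head of the free list

    def insert(self, record):
        if self.free_head is not None:      # reuse a freed slot first
            slot = self.free_head
            self.free_head = self.slots[slot]
            self.slots[slot] = record
        else:                               # otherwise append at the end
            slot = len(self.slots)
            self.slots.append(record)
        return slot

    def delete(self, slot):
        self.slots[slot] = self.free_head   # chain this slot onto the list
        self.free_head = slot

f = FixedLengthFile()
rids = [f.insert(r) for r in ("A-101", "A-102", "A-103")]
f.delete(rids[1])
print(f.insert("A-104"))    # prints 1: the freed slot is reused
```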

Variable-length Records Slotted page structure Indirection: the records may move inside the page, but the outside world is oblivious to it Why? The headers are used as an indirection mechanism Record ID 1000 is the 5th entry in page number X
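
A sketch of that indirection: the slot directory maps a stable slot number to a (possibly moving) byte offset inside the page, so a record ID of the form (page number, slot number) never changes:

```python
class SlottedPage:
    """Variable-length records behind a slot directory (a sketch;
    a real page grows the directory and data toward each other)."""
    def __init__(self):
        self.data = bytearray()
        self.slots = []                    # slot number -> (offset, length)

    def insert(self, record: bytes) -> int:
        self.slots.append((len(self.data), len(record)))
        self.data += record
        return len(self.slots) - 1         # the stable slot number

    def read(self, slot: int) -> bytes:
        offset, length = self.slots[slot]
        return bytes(self.data[offset:offset + length])

page = SlottedPage()
rid = page.insert(b"(123, 'Smith', 'Main St')")
print(page.read(rid))
```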

File Organization Which block of a file should a record go to? Anywhere? Called heap organization How would you search for SSN = 123? Sorted by SSN? Called sequential organization Keeping it sorted would be painful How would you search? Based on a hash key? Called hashing organization Store the record with SSN = x in block number x % 1000 Why?

Sequential File Organization Keep sorted by some search key Insertion Find the block in which the tuple should be If there is free space, insert it Otherwise, must create overflow pages Deletions Delete and keep the free space Databases tend to be insert heavy, so free space gets used fast Can become fragmented Must reorganize once in a while

Sequential File Organization What if I want to find a particular record by value? Account info for SSN = 123 Binary search Takes log2(n) disk accesses, and these are random accesses Too much: n = 1,000,000,000 gives log2(n) = 30 Recall each random access takes approx 10 ms So 300 ms to find just one account's information: < 4 requests satisfied per second

Outline Storage hierarchy Disks RAID Buffer Manager File Organization Indexes Etc

Index A data structure for efficient search through large databases Two key ideas: The records are mapped to the disk blocks in specific ways Sorted, or hash-based Auxiliary data structures are maintained that allow quick search Think of a library index/catalogue Search key: attribute or set of attributes used to look up records E.g. SSN for a persons table Two types of indexes Ordered indexes Hash-based indexes

Ordered Indexes Primary index: the relation is sorted on the search key of the index Secondary index: it is not Can have only one primary index on a relation

Primary Sparse Index Every key doesn't have to appear in the index Allows for very small indexes Better chance of fitting in memory Tradeoff: must access the relation file even if the record is not present
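
A sketch of a sparse-index lookup: keep one entry per block (the block's first key), binary-search the index, then scan the single block it points to. The data here is made up:

```python
import bisect

# Relation sorted on the key, split into blocks (made-up data).
blocks = [[(100, "A"), (123, "B")], [(210, "C"), (300, "D")], [(450, "E")]]
index = [blk[0][0] for blk in blocks]    # sparse: first key of each block

def lookup(key):
    pos = bisect.bisect_right(index, key) - 1   # last entry <= key
    if pos < 0:
        return None
    for k, rec in blocks[pos]:           # the block is read either way...
        if k == key:
            return rec
    return None                          # ...even if the key is absent

print(lookup(210))   # 'C'
print(lookup(211))   # None, but we still paid for a block access
```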

Secondary Index Relation sorted on branch But we want an index on balance Must be dense Every search key must appear in the index

Multi-level Indexes What if the index itself is too big for memory? Relation size = n = 1,000,000,000 Block size = 100 tuples per block So, number of pages = 10,000,000 Keeping one entry per page takes too much space Solution Build an index on the index itself

Multi-level Indexes How do you search through a multi-level index? What about keeping the index up-to-date? Tuple insertions and deletions This is a static structure Need overflow pages to deal with insertions Works well if no inserts/deletes Not so good when inserts and deletes are common

Outline Storage hierarchy Disks RAID Buffer Manager File Organization Indexes B+-Tree Indexes Etc..

Example B+-Tree Index [figure]

B+-Tree Node Structure Typical node: the Ki are the search-key values; the Pi are pointers to children (for non-leaf nodes) or pointers to records or buckets of records (for leaf nodes). The search keys in a node are ordered: K1 < K2 < K3 < ... < Kn-1

Properties of B+-Trees It is balanced Every path from the root to a leaf has the same length Leaf nodes (at the bottom) P1 contains the pointer(s) to the tuple(s) with key K1 Pn is a pointer to the next leaf node Must contain at least n/2 entries

Example B+-Tree Index [figure]

Properties Interior nodes All tuples in the subtree pointed to by P1 have search key < K1 So to find a tuple with key k < K1, follow P1 Finally, the search keys in the tuples contained in the subtree pointed to by Pn are all larger than Kn-1 Must contain at least n/2 entries (unless it is the root)

Example B+-Tree Index [figure]

B+-Trees - Searching How to search? Follow the pointers Logarithmic: log_{B/2}(N), where B = number of entries per block B is also called the order of the B+-Tree index Typically 100 or so If a relation contains 1,000,000,000 entries, a search takes only 4 random accesses The top levels are typically in memory So only 1 or 2 random accesses per request are required
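
A minimal sketch of that descent, using the (Ki, Pi) layout from the node-structure slide (bisect finds which pointer to follow at each level):

```python
import bisect

class Node:
    def __init__(self, keys, children=None, records=None, next_leaf=None):
        self.keys = keys            # K1 < K2 < ... as on the earlier slide
        self.children = children    # interior nodes: child pointers
        self.records = records      # leaf nodes: one record pointer per key
        self.next_leaf = next_leaf  # leaf nodes: Pn, the next-leaf pointer

def search(node, key):
    """One pointer followed per level: keys < K1 go to P1, and so on."""
    while node.children is not None:                  # interior node
        node = node.children[bisect.bisect_right(node.keys, key)]
    i = bisect.bisect_left(node.keys, key)            # now at a leaf
    if i < len(node.keys) and node.keys[i] == key:
        return node.records[i]
    return None

leaf1 = Node(keys=[5, 12], records=["r5", "r12"])
leaf2 = Node(keys=[20, 31], records=["r20", "r31"])
leaf1.next_leaf = leaf2
root = Node(keys=[20], children=[leaf1, leaf2])
print(search(root, 12), search(root, 20))   # r12 r20
```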

Tuple Insertion Find the leaf node where the search key should go If already present Insert record in the file. Update the bucket if necessary This would be needed for secondary indexes If not present Insert the record in the file Adjust the index Add a new (Ki, Pi) pair to the leaf node Recall the keys in the nodes are sorted What if there is no space?

Tuple Insertion Splitting a node Node has too many key-pointer pairs Needs to store n, only has space for n-1 Split the node into two nodes Put about half in each Recursively go up the tree May result in splitting all the way to the root In fact, may end up adding a level to the tree Pseudocode in the book!!
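
The leaf-split step alone, continuing the Node sketch above (the recursive parent adjustment, insert_in_parent in the book's Figure 12.13, is omitted):

```python
def split_leaf(leaf):
    """Split an over-full leaf in half; return the (key, pointer) pair
    that must be inserted into the parent, possibly recursively."""
    mid = len(leaf.keys) // 2
    right = Node(keys=leaf.keys[mid:], records=leaf.records[mid:],
                 next_leaf=leaf.next_leaf)
    leaf.keys, leaf.records = leaf.keys[:mid], leaf.records[:mid]
    leaf.next_leaf = right
    return right.keys[0], right
```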

B+-Trees: Insertion B+-Tree before and after insertion of "Clearview"

Updates on B+-Trees: Deletion Find the record, delete it. Remove the corresponding (search-key, pointer) pair from a leaf node Note that there might be another tuple with the same search key In that case, this is not needed Issue: the leaf node now may contain too few entries Why do we care? Solution: 1. See if you can borrow some entries from a sibling 2. If all the siblings are also just barely full, then merge (opposite of split) May end up merging all the way to the root In fact, may reduce the height of the tree by one

Examples of B+-Tree Deletion Before and after deleting "Downtown" Deleting "Downtown" causes merging of under-full leaves A leaf node can become empty only for n=3!

Examples of B+-Tree Deletion Deletion of "Perryridge" from the result of the previous example

Example of B+-Tree Deletion Before and after deletion of "Perryridge" from the earlier example

Another B+Tree Insertion Example INITIAL TREE Next slides show the insertion of (125) into this tree According to the Algorithm in Figure 12.13, Page 495

Another Example: INSERT (125) Step 1: Split L to create L' Insert the lowest value in L' (130) upward into the parent P

Another Example: INSERT (125) Step 2: Insert (130) into P by creating a temp node T

Another Example: INSERT (125) Step 3: Create P'; distribute from T into P and P' The new P' has only 1 key but two pointers, so it is okay This follows the last 4 lines of Figure 12.13 (note that n = 4) K = 130: insert upward into the root

Another Example: INSERT (125) Step 4: Insert (130) into the parent (R); create R' Once again following the insert_in_parent() procedure, K = 1000

Another Example: INSERT (125) Step 5: Create a new root

B+ Trees in Practice Typical order: 100. Typical fill-factor: 67%. Average fanout = 133 Typical capacities: Height 3: 133^3 = 2,352,637 entries Height 4: 133^4 = 312,900,721 entries Can often hold the top levels in the buffer pool: Level 1 = 1 page = 8 KBytes Level 2 = 133 pages = 1 MByte Level 3 = 17,689 pages = 133 MBytes
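
Reproducing the capacity arithmetic (the fanout of 133 is the slide's assumption):

```python
fanout = 133
for height in (3, 4):
    print(f"height {height}: {fanout ** height:,} entries")
# height 3: 2,352,637 entries
# height 4: 312,900,721 entries
```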

B+ Trees: Summary Searching: log_d(n), where d is the order and n is the number of entries Insertion: Find the leaf to insert into If full, split the node, and adjust the index accordingly Similar cost as searching Deletion: Find the leaf node Delete May not remain half-full; must adjust the index accordingly

More Primary vs Secondary Indexes More B+-Trees Hash-based Indexes Static Hashing Extendible Hashing Linear Hashing Grid-files R-Trees etc

Secondary Index If relation not sorted by search key, called a secondary index Not all tuples with the same search key will be together Searching is more expensive

B+-Tree File Organization Store the records at the leaves Sorted order etc..

B-Tree Predates the B+-Tree Different treatment of search keys Less storage Significantly harder to implement Not used much.

Hash-based File Organization Store a record with search key k in block number h(k) e.g. for a person file, h(ssn) = SSN % 4 Blocks are called buckets [Figure: records such as (1000, A, ...), (200, B, ...), (4044, C, ...) hashed into Blocks 0 to 3, with an overflow page hanging off Block 2] What if a block becomes full? Overflow pages Uniformity property: don't want all tuples to map to the same bucket h(ssn) = SSN % 2 would be bad

Hash-based File Organization Hashed on branch-name Hash function: a = 1, b = 2, ..., z = 26 h("abz") = (1 + 2 + 26) % 10 = 9
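
The same hash function in code, with ten buckets (overflow chains omitted here; a full bucket would spill into overflow pages as on the previous slide):

```python
NUM_BUCKETS = 10

def h(branch_name: str) -> int:
    # a = 1, b = 2, ..., z = 26, summed, mod the number of buckets
    return sum(ord(c) - ord('a') + 1 for c in branch_name) % NUM_BUCKETS

buckets = [[] for _ in range(NUM_BUCKETS)]
for branch in ("abz", "mianus", "perryridge"):
    buckets[h(branch)].append(branch)

print(h("abz"))       # (1 + 2 + 26) % 10 = 9, matching the slide
print(buckets[9])     # ['abz']
```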

Hash Indexes Extends the basic idea Search: Find the block with search key Follow the pointer Range search? a < X < b?

Hash Indexes Very fast search on equality Can t search for ranges at all Must scan the file Inserts/Deletes Overflow pages can degrade the performance Two approaches Dynamic hashing Extendible hashing

Grid Files Multidimensional index structure Can handle: X = x1 and Y = y1 a < X < b and c < Y < d Stores pointers to tuples with: branch-name between Mianus and Perryridge, and balance < 1k

R-Trees For spatial data (e.g. maps, rectangles, GPS data etc)

Conclusions Indexing Goal: Quickly find the tuples that match certain conditions Equality and range queries most common Hence B+-Trees the predominant structure for on-disk representation Hashing is used more commonly for in-memory operations Many many more types of indexing structures exist For different types of data For different types of queries E.g. nearest-neighbor queries