Storing Data: Disks and Files


Storing Data: Disks and Files
Chapter 7 (2nd edition), Chapter 9 (3rd edition)

"Yea, from the table of my memory I'll wipe away all trivial fond records."
-- Shakespeare, Hamlet

Database Management Systems, R. Ramakrishnan and J. Gehrke

Disks and Files

A DBMS stores information on (hard) disks. This has major implications for DBMS design!
- READ: transfer data from disk to main memory (RAM).
- WRITE: transfer data from RAM to disk.
Both are high-cost operations relative to in-memory operations, so they must be planned carefully!

Why Not Store Everything in Main Memory?

- Costs too much. $100 will buy you either 512 MB of RAM or 100 GB of disk today.
- Main memory is volatile. We want data to be saved (persistent) between runs. (Obviously!)

Typical storage hierarchy:
- Cache and main memory (RAM) for currently used data.
- Disk for the main database (secondary storage).
- Other storage devices (flash cards, USB sticks, ...).
- Tapes for archiving older versions of the data (tertiary storage).
- DVD/CD-ROM and other devices.

Disks

- Secondary storage device of choice.
- Main advantage over tapes: random access vs. sequential access.
- Data is stored and retrieved in units called disk blocks or pages.
- Unlike RAM, the time to retrieve a disk page varies depending upon its location on disk. Therefore, the relative placement of pages on disk has a major impact on DBMS performance!

Components of a Disk

- The platters spin (say, 120 rps, i.e., 7200 RPM).
- The arm assembly is moved in or out to position a head on a desired track. The tracks under the heads make a cylinder (imaginary!).
- Only one head reads/writes at any one time.
- Block size is a multiple of the sector size (which is fixed).
- Addressing: CHS (cylinder, head, sector).
[Figure: platters on a spindle; tracks divided into sectors; disk heads on a moving arm assembly.]

Accessing a Disk Page

Time to access (read/write) a disk block:
- Seek time (moving the arm to position the disk head on the track)
- Rotational delay (waiting for the block to rotate under the head)
- Transfer time (actually moving data to/from the disk surface)

Seek time and rotational delay dominate:
- Seek time varies from about 1 to 20 msec
- Rotational delay varies from 0 to 10 msec
- Transfer rate is about 1 msec per 4 KB page
(On-disk buffer size: 2 MB typical, 8 MB, ...)

Key to lower I/O cost: reduce seek/rotation delays! Hardware vs. software solutions?
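The cost model above can be sketched numerically. The parameter values below are the slide's illustrative figures (10 msec seek, 5 msec rotation, 1 msec per 4 KB page), not measurements of any particular drive:

```python
# Illustrative single-page I/O cost: seek + rotational delay + transfer.
# Parameter values are the slide's rough figures, not real drive specs.
def page_access_ms(seek_ms=10.0, rotational_ms=5.0, page_kb=4,
                   transfer_ms_per_4kb=1.0):
    """Estimated time to read/write one disk page, in milliseconds."""
    transfer_ms = (page_kb / 4) * transfer_ms_per_4kb
    return seek_ms + rotational_ms + transfer_ms

print(page_access_ms())                                 # 16.0 ms total
print(page_access_ms(seek_ms=0.0, rotational_ms=0.0))   # 1.0 ms: transfer only
```

Note how seek and rotation account for 15 of the 16 msec; a page that needs neither (e.g., the `next' block on the same track) costs only the transfer time.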

Accessing a Disk Page

Typical numbers:
- Average seek time: 9.1 msec
- Average rotational delay: 4.17 msec
- Transfer rate: 13 MB/sec
- Seek from one track to the next: 2.2 msec
- Maximum seek time: 15 msec

A disk access takes about 10 msec, whereas accessing a memory location takes about 60 nanoseconds!!
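The gap between those two numbers is worth computing explicitly; using the slide's average seek and rotational delay:

```python
# Using the slide's figures: average positioning cost of a disk access
# vs. a main-memory access.
disk_access_ms = 9.1 + 4.17     # average seek + average rotational delay
memory_access_ns = 60

ratio = (disk_access_ms * 1e6) / memory_access_ns   # convert ms -> ns
print(f"disk is ~{ratio:,.0f}x slower than memory")  # roughly 200,000x
```

A five-order-of-magnitude gap is why the DBMS plans every I/O carefully while treating in-memory work as nearly free.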

Arranging Pages on Disk

The `next' block concept: blocks on the same track, followed by blocks on the same cylinder, followed by blocks on an adjacent cylinder.
Blocks in a file should be arranged sequentially on disk (by `next') to minimize seek and rotational delay.
For a sequential scan, pre-fetching several pages at a time is a big win!
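A back-of-the-envelope comparison shows why sequential layout matters. This sketch assumes the simplified model that a sequential read pays positioning cost once, while a random layout pays it per page; the numbers are the slide's rough figures:

```python
# Cost of reading 1000 pages: random placement pays seek + rotation per
# page; sequential (`next' block) placement pays it roughly once.
seek, rotation, transfer = 10.0, 5.0, 1.0   # msec, illustrative values
n_pages = 1000

random_ms = n_pages * (seek + rotation + transfer)
sequential_ms = seek + rotation + n_pages * transfer

print(random_ms, sequential_ms)   # 16000.0 vs 1015.0 -- ~16x cheaper
```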

RAID

RAID: Redundant Array of Independent/Inexpensive Disks.
RAID is a family of techniques for managing multiple disks to provide desirable cost, data availability, and performance characteristics.
1987: three professors at UC Berkeley published a paper entitled "A Case for Redundant Arrays of Inexpensive Disks".
Basic idea: combine multiple small, off-the-shelf, inexpensive disks into an array that outperforms a Single Large Expensive Drive (SLED).

RAID

Disk array: an arrangement of several disks that gives the abstraction of a single, large disk.
Goals: increase performance and reliability.
Two main techniques:
- Data striping: data is partitioned; the size of a partition is called the striping unit. It can vary from a sector to several megabytes. Partitions are distributed over several disks.
- Redundancy: more disks => more failures. Redundant information allows reconstruction of data if a disk fails.
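Striping can be sketched as a simple round-robin mapping (the usual scheme, assumed here for illustration): logical striping unit i goes to disk i mod N.

```python
# Minimal round-robin striping sketch: logical unit i lives on disk
# i % n_disks, at position i // n_disks on that disk.
def stripe_location(unit_no, n_disks):
    """Map a logical striping unit to (disk number, position on disk)."""
    return unit_no % n_disks, unit_no // n_disks

# Units 0..7 over 4 disks: consecutive units land on different disks,
# so a large sequential read touches all four disks in parallel.
print([stripe_location(i, 4) for i in range(8)])
# [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1), (2, 1), (3, 1)]
```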

RAID

The stripes are interleaved so that the disk space is composed of alternating stripes from each drive. Data is written across the stripes instead of onto a single drive.
- I/O-intensive workloads: use large stripes, so that many independent I/Os can proceed concurrently on different disks.
- Data-intensive workloads: use small stripes, so that accesses to large records span multiple disks and proceed in parallel.

RAID Benefits

- Data protection: parity bits are used to regenerate data if a disk fails.
- Availability: if a disk fails, the rest remain available; hot spares can join the array.

RAID Levels: Level 0

- No data redundancy by definition: data is striped across all drives without parity.
- Redundancy: none; data is lost if a drive fails.
- Performance: high; no overhead; parallel access.
- Disk utilization: 100%.
- Drawbacks: no data redundancy.

RAID Level 1: Disk Mirroring

- Data is written to a primary disk and a secondary (mirror) disk; identical data is stored on both disks. No striping.
- Redundancy: mirrored.
- Performance: high for read-intensive applications (parallel reads).
- Disk utilization: 50%.
- Drawbacks: twice the cost for disks; a write involves 2 disk writes.

RAID Level 0+1: Striping and Mirroring

- Often also referred to as RAID 10; a combination of levels 0 and 1. Data is striped across pairs of mirrored disks.
- Redundancy: can sustain more than one drive failure (as long as the failures are not in the same mirrored pair).
- Performance: high; no parity overhead; parallel reads; a write involves two disks. Maximum transfer rate = aggregate bandwidth.
- Disk utilization: 50%.
- Drawbacks: high cost per megabyte.

RAID Level 2: Bit Interleaving with Error-Correcting Codes

Data is striped at the bit level across an array of disk drives, with additional drives inserted into the system for error-correcting code or parity data.
RAID level 2 is intended for use with drives that don't have built-in error detection. Since all SCSI drives today have built-in error detection, RAID level 2 is of little use.

RAID Level 3: Data Striping with Dedicated Parity and Parallel Access

Data is striped at the byte level across an array of disk drives, with one drive dedicated to parity. Data is accessed in parallel.
- Redundancy: one drive is dedicated to parity. Data is regenerated in the event of a drive failure.
- Performance: high in data-intensive applications, because data is accessed in parallel. High transfer rates.
- Disk utilization: (N-1)/N (N = number of disks).
- Drawbacks: poor performance in I/O-intensive applications, because write operations cannot be overlapped due to the dedicated parity drive (contention for the parity drive).

RAID Level 4: Data Striping with Dedicated Parity and Independent Access

Data is striped at the block level across an array of disk drives, with one drive dedicated to parity. Data is accessed independently instead of in parallel.
- Redundancy: one drive is dedicated to parity. Data is regenerated in the event of a drive failure.
- Performance: high in transaction-intensive applications with many read requests, because data is accessed independently. High transaction rate.
- Disk utilization: (N-1)/N (N = number of disks).
- Drawbacks: write bottleneck. Write operations cannot be overlapped because there is only one parity drive (contention for the parity drive).

RAID Level 5: Data Striping with Distributed Parity

Data is striped across a group of disk drives with distributed parity: the parity information for each stripe is written to a different disk in the array.
- Redundancy: parity is distributed across the disks in the array. Data is regenerated in the event of a drive failure.
- Performance: high in small-record, multiprocessing environments, because there is no contention for a single parity disk and read and write operations can be overlapped. No write bottleneck as with RAID 4.
- Disk utilization: (N-1)/N (N = number of disks).
- Drawbacks: distributed parity adds overhead on write operations. There is also overhead when data is changed, because the parity information must be located and then recalculated.
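The "regenerated in the event of a drive failure" step in RAID levels 3-5 rests on XOR parity: parity is the bytewise XOR of the data blocks in a stripe, and XOR-ing the survivors reproduces a lost block. A minimal sketch (block contents here are made up for illustration):

```python
# XOR parity sketch: parity = d0 ^ d1 ^ ... ^ dk, so any one lost block
# equals the XOR of the remaining data blocks and the parity block.
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte blocks."""
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

data = [b"DBMS", b"page", b"rows"]     # one stripe's data blocks
parity = xor_blocks(data)

# The disk holding data[1] fails; rebuild it from the survivors + parity.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt)   # b'page'
```

This is also why writes carry overhead: changing one block forces the stripe's parity to be read, recomputed, and rewritten.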

RAID Level 6: P+Q Redundancy

Data is striped across a group of disk drives with distributed parity: the parity information for each stripe is written to two different disks in the array. A Reed-Solomon code is used (http://www.4i2i.com/reed_solomon_codes.htm).
- Redundancy: parity is distributed across the disks in the array. Data can be regenerated in the event of 2 drive failures.
- Performance: same as RAID 5.
- Disk utilization: (N-2)/N (N = number of disks).
- Drawbacks: distributed parity adds overhead on write operations. There is also overhead when data is changed, because the parity information must be located and then recalculated.

RAID: Hardware vs. Software

- Hardware RAID: all RAID functions are handled independently of the host.
- Software RAID: offers low cost, but utilizes system resources: it occupies host memory, consumes CPU cycles, and is OS-dependent.

Disk Space Management

The lowest layer of DBMS software manages space on disk. Higher levels call upon this layer to:
- allocate/de-allocate a page
- read/write a page
A request for a sequence of pages must be satisfied by allocating the pages sequentially on disk! Higher levels don't need to know how this is done, or how free space is managed.

Buffer Management in a DBMS

Data must be in RAM for the DBMS to operate on it!
The buffer pool is a collection of main-memory frames; page requests from higher levels are satisfied by reading disk pages from the DB into free frames, with the choice of frame dictated by the replacement policy.
A mapping of <frame#, pageid> pairs is maintained.
[Figure: higher levels issue page requests; the buffer pool in main memory holds disk pages in frames; pages move between the pool and the DB on disk.]

When a Page is Requested...

If the requested page is not in the pool:
- Choose a frame for replacement.
- If the frame is dirty, write it to disk.
- Read the requested page into the chosen frame.
Pin the page (increment its pin_count) and return its address.
If there is no frame to choose, then the buffer is full. (When is the buffer full?)
If requests can be predicted (e.g., sequential scans), pages can be pre-fetched several at a time!
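The request protocol above can be sketched in a few lines. This is a toy model, not a real buffer manager: disk I/O is stubbed out with comments, and the victim choice simply takes any unpinned frame, standing in for a real replacement policy.

```python
# Toy buffer pool illustrating the request/pin protocol above.
class BufferPool:
    def __init__(self, n_frames):
        self.n_frames = n_frames
        self.frames = {}     # page_id -> {"pin_count": int, "dirty": bool}

    def request_page(self, page_id):
        if page_id not in self.frames:                 # not in pool
            if len(self.frames) >= self.n_frames:
                victims = [p for p, f in self.frames.items()
                           if f["pin_count"] == 0]     # only unpinned pages
                if not victims:
                    raise RuntimeError("buffer full: all pages pinned")
                victim = victims[0]                    # naive policy
                if self.frames[victim]["dirty"]:
                    pass                               # write victim to disk here
                del self.frames[victim]
            self.frames[page_id] = {"pin_count": 0, "dirty": False}  # read page in
        self.frames[page_id]["pin_count"] += 1         # pin
        return self.frames[page_id]

    def release_page(self, page_id, dirty=False):
        frame = self.frames[page_id]
        frame["pin_count"] -= 1                        # unpin
        frame["dirty"] = frame["dirty"] or dirty

pool = BufferPool(n_frames=2)
pool.request_page("A"); pool.release_page("A", dirty=True)
pool.request_page("B")
pool.request_page("C")            # pool full: evicts unpinned A (B is pinned)
print(sorted(pool.frames))        # ['B', 'C']
```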

More on Buffer Management

- The requestor of a page must unpin it, and indicate whether the page has been modified: a dirty bit is used for this.
- A page in the pool may be requested many times; a pin count is used. A page is a candidate for replacement iff its pin count = 0.
- Concurrency control (CC) & recovery may entail additional I/O when a frame is chosen for replacement. (Write-Ahead Log protocol; more later.)

Buffer Replacement Policy

The frame is chosen for replacement by a replacement policy: least-recently-used (LRU), Clock, MRU, etc.
The policy can have a big impact on the number of I/Os; it depends on the access pattern.
Sequential flooding: a nasty situation caused by LRU + repeated sequential scans. If # buffer frames < # pages in the file, every page request causes an I/O. MRU is much better in this situation (but not in all situations, of course).
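Sequential flooding is easy to demonstrate with a tiny simulation: repeatedly scan a 4-page file through a 3-frame pool and count misses under each policy. The pool sizes and scan counts are arbitrary illustrative choices.

```python
# Simulate repeated sequential scans of a 4-page file through a 3-frame
# buffer under LRU vs. MRU eviction, counting misses (I/Os).
def count_misses(policy, n_frames=3, n_pages=4, n_scans=3):
    frames, misses = [], 0          # frames ordered oldest -> newest use
    for _ in range(n_scans):
        for page in range(n_pages):
            if page in frames:
                frames.remove(page)             # hit: refresh recency
            else:
                misses += 1                     # miss: one I/O
                if len(frames) == n_frames:
                    if policy == "LRU":
                        frames.pop(0)           # evict least recently used
                    else:
                        frames.pop()            # MRU: evict most recently used
            frames.append(page)
    return misses

print(count_misses("LRU"), count_misses("MRU"))  # 12 6: LRU misses every access
```

With LRU, the page evicted is always the one needed next, so all 12 accesses miss; MRU keeps most of the file resident after the first scan.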

DBMS vs. OS File System

The OS does disk space & buffer management: why not let the OS manage these tasks?
- Differences in OS support: portability issues.
- Some limitations, e.g., files can't span disks.
- Buffer management in a DBMS requires the ability to: pin a page in the buffer pool, force a page to disk (important for implementing CC & recovery), adjust the replacement policy, and pre-fetch pages based on the access patterns of typical DB operations.

Record Formats: Fixed Length

Each record consists of fields F1..F4 with fixed lengths L1..L4; given the base address B of the record, the address of F3 is B + L1 + L2.
Information about field types is the same for all records in a file; it is stored in the system catalogs.
Finding the i-th field does not require a scan of the record: its offset is computed from the field lengths.
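For fixed-length records, locating the i-th field is pure arithmetic over the field lengths stored once in the system catalog; a minimal sketch (the lengths and base address are made-up example values):

```python
# Address of field i (0-based) in a fixed-length record: B + L1 + ... + Li.
# The lengths come from the catalog, not from the record itself.
def field_offset(base_addr, lengths, i):
    return base_addr + sum(lengths[:i])

lengths = [4, 8, 2, 4]                   # catalog: byte lengths of F1..F4
print(field_offset(1000, lengths, 2))    # F3 starts at B + L1 + L2 = 1012
```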

Record Formats: Variable Length

Two alternative formats (# fields is fixed):
- Fields delimited by special symbols: a field count, followed by the fields F1..F4 separated by a delimiter (e.g., $).
- Array of field offsets at the front of the record, giving the start of each field.
The second offers direct access to the i-th field and efficient storage of nulls (a special "don't know" value), at a small directory overhead.
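The offset-array format can be sketched as follows. The exact byte layout here is illustrative, not the book's wire format; the point is that field access indexes the offset array instead of scanning past earlier variable-length fields, and a null field occupies zero bytes:

```python
# Sketch of the "array of field offsets" record format.
def encode(fields):
    """Return (offset array with one end-of-record entry, field bytes)."""
    offsets, pos = [], 0
    for f in fields:
        offsets.append(pos)
        pos += len(f)
    offsets.append(pos)                          # end-of-record offset
    return offsets, b"".join(fields)

def get_field(offsets, payload, i):
    """Direct access: no scan over earlier variable-length fields."""
    return payload[offsets[i]:offsets[i + 1]]

offsets, payload = encode([b"53666", b"Jones", b"", b"3.4"])  # b"" models NULL
print(get_field(offsets, payload, 1))   # b'Jones'
print(get_field(offsets, payload, 2))   # b'' -- the null costs no data bytes
```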

Subtle Issues: Variable Length

What about modification?
- Growth may involve moving fields.
- What if a record does not fit into the space remaining on a page? (rids and forwarding addresses)
- A record may grow to occupy more than one page! (chained smaller records)

Page Formats: Fixed Length Records

Two alternatives:
- PACKED: slots 1..N are filled contiguously, with free space at the end; the page header stores the number of records N.
- UNPACKED with a BITMAP: slots 1..M, with a bitmap in the header marking which of the M slots are occupied; the header stores the number of slots M.
Record id (rid) or tuple id (tid) = <page id, slot #>.
In the packed alternative, moving records for free space management changes their rids; this may not be acceptable.

Page Formats: Variable Length Records

Each page keeps a slot directory containing, for each slot, the offset of the corresponding record on the page, plus the number of slots N and a pointer to the start of free space. Rid = <page i, slot n>.
Records can be moved on the page without changing their rids; so this format is attractive for fixed-length records too.
A deleted record's entry cannot be removed from the slot directory! (Why?)
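A minimal slotted-page sketch makes the answer to the "Why?" concrete: deleting a record empties its slot but keeps the slot in the directory, because removing it would renumber later slots and invalidate their rids. The in-memory layout below is illustrative (real pages pack the directory and records into one byte array):

```python
# Toy slotted page: records are appended to the data area; the slot
# directory maps slot# -> (offset, length), or None for a deleted record.
class SlottedPage:
    def __init__(self):
        self.data = b""
        self.slots = []                    # slot# -> (offset, length) | None

    def insert(self, record):
        self.slots.append((len(self.data), len(record)))
        self.data += record
        return len(self.slots) - 1         # slot# is part of the rid

    def delete(self, slot_no):
        self.slots[slot_no] = None         # slot stays: later slot#s unchanged

    def get(self, slot_no):
        off, length = self.slots[slot_no]
        return self.data[off:off + length]

page = SlottedPage()
r0 = page.insert(b"rec-A")
r1 = page.insert(b"rec-B")
page.delete(r0)
print(page.get(r1))   # b'rec-B' -- rid <page, 1> still valid after the delete
```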

Files of Records

Pages or blocks are fine for doing I/O, but higher levels of the DBMS operate on records and files of records.
FILE: a collection of pages, each containing a collection of records. Must support:
- insert/delete/modify a record
- read a particular record (specified using its record id)
- scan all records (possibly with some conditions on the records to be retrieved)

Unordered (Heap) Files

The simplest file structure: it contains records in no particular order.
As the file grows and shrinks, disk pages are allocated and de-allocated.
To support record-level operations, we must:
- keep track of the pages in a file
- keep track of free space on pages
- keep track of the records on a page
There are many alternatives for keeping track of this.

Heap File Implemented as a List

The header page points to two linked lists of data pages: one of full pages, and one of pages with free space. The header page id and the heap file name must be stored someplace.
Each page contains 2 `pointers' plus data.
Disadvantage for variable-length records: virtually all pages end up on the free-space list!

Heap File Using a Page Directory

A directory, reached from the header page, holds one entry per data page 1..N. The entry for a page can include the number of free bytes on the page.
The directory is itself a collection of pages; a linked list implementation is just one alternative.
The directory is much smaller than a linked list of all heap file pages!

Indexes

A heap file allows us to retrieve records:
- by specifying the rid, or
- by scanning all records sequentially.
Sometimes we want to retrieve records by specifying values in one or more fields, e.g.:
- Find all students in the CS department
- Find all students with a gpa > 3
Indexes are file structures that enable us to answer such value-based queries efficiently.

System Catalogs

For each index: structure (e.g., B+ tree) and search key fields.
For each relation:
- name, file name, file structure (e.g., heap file)
- attribute name and type, for each attribute
- index name, for each index
- integrity constraints
For each view: view name and definition.
Plus statistics, authorization, buffer pool size, etc.
Catalogs are themselves stored as relations!

Attr_Cat(attr_name, rel_name, type, position)

attr_name  rel_name       type     position
---------  -------------  -------  --------
attr_name  Attribute_Cat  string   1
rel_name   Attribute_Cat  string   2
type       Attribute_Cat  string   3
position   Attribute_Cat  integer  4
sid        Students       string   1
name       Students       string   2
login      Students       string   3
age        Students       integer  4
gpa        Students       real     5
fid        Faculty        string   1
fname      Faculty        string   2
sal        Faculty        real     3

Summary

- Disks provide cheap, non-volatile storage. Random access, but the cost depends on the location of the page on disk; it is important to arrange data sequentially to minimize seek and rotational delays.
- The buffer manager brings pages into RAM. A page stays in RAM until released by its requestor, and is written to disk when its frame is chosen for replacement (which is some time after the requestor releases the page). The choice of frame to replace is based on the replacement policy. The buffer manager also tries to pre-fetch several pages at a time.

Summary (Contd.)

- DBMS vs. OS file support: the DBMS needs features not found in many OSs, e.g., forcing a page to disk, controlling the order of page writes to disk, files spanning disks, and the ability to control pre-fetching and the page replacement policy based on predictable access patterns.
- The variable-length record format with a field offset directory offers direct access to the i-th field and support for null values.
- The slotted page format supports variable-length records and allows records to move on the page.

Summary (Contd.)

- The file layer keeps track of the pages in a file, and supports the abstraction of a collection of records. Pages with free space are identified using a linked list or a directory structure (similar to how the pages in a file are tracked).
- Indexes support efficient retrieval of records based on the values in some fields.
- Catalog relations store information about relations, indexes, and views (information that is common to all records in a given collection).