Database Systems and Modern CPU Architecture
Prof. Dr. Torsten Grust
Winter Term 2006/07
Hard Disk → RAM
Administrativa
Lecture hours (@ MI HS 2):
- Monday, 09:15–10:00
- Tuesday, 14:15–15:45
- No lectures on Nov 20–21, 2006
Tutorial/Lab (Jens Teubner, @ MW 1450):
- Thursday, 10:15–11:45
Administrativa
Course homepage: http://www-db.in.tum.de/cms/teaching/ws0607/mmdbms
Contact:
- Torsten Grust grust@in.tum.de
- Jens Teubner jens.teubner@in.tum.de
Rooms: 02.11.044, 02.11.042 (drop in if the doors are open)
Course Prerequisites
The following courses are helpful for following this course but are not strictly (or even formally) required:
1. IN0004: Einführung in die Technische Informatik (CPU architecture, assembly, memory hierarchy)
2. IN0008: Grundlagen: Datenbanken (query processing, buffer management, index structures)
Assembly Language
Here and there we will analyze snippets of (mostly MIPS-style) assembly language programs.

LD   R1,0(R2)    ; Regs[R1] ← M[Regs[R2]+0]
DSUB R4,R1,R5    ; Regs[R4] ← Regs[R1] - Regs[R5]
AND  R6,R1,R7    ; Regs[R6] ← Regs[R1] & Regs[R7]
ORI  R8,R1,255   ; Regs[R8] ← Regs[R1] | 255

We will also look at Intel IA-32 and Itanium (IA-64).
Reading Material
The CPU architecture and memory hierarchy aspects of this course are largely covered by:
Computer Architecture: A Quantitative Approach, 3rd ed.
John L. Hennessy, David A. Patterson
Morgan Kaufmann, 2003 (Chapters 1–5, Appendix A)
Reading Material
Aspects of database technology are mainly discussed in a number of research papers. References will be given here; download the papers from the course homepage. (They help to appreciate the details but are not necessary to pass the exam.)
Tutorials & Assignments
Tutorial sessions will be as hands-on as possible:
- MonetDB
- Mini programming exercises (language: C)
- Trying out CPU performance and event counting, etc.
We will hand out weekly assignments. There will be no grading, but Jens will develop and discuss solutions with you.
Examination
Examination (Klausur): Thursday, Feb 8, 2007, 10:15–11:45 @ MW 1450
There are no formal requirements to take the exam (although it is highly advisable to actively work on the assignments).
Hard Disk → RAM
Today, it is conceivable to build database systems that operate primarily in main memory. In such systems, (disk) I/O management no longer plays a central role. Instead, the performance of a main-memory database system (MMDBMS) is determined by other system components: the CPU and the memory hierarchy.
A Database in Primary Memory?
Commodity hardware typically comes with primary memory sizes beyond 1 GB. Since the principle of locality applies to programs and data ("90% of all database operations touch 10% of the data"), most database hot sets easily fit into RAM. Even further: the author of "A Database in the CPU Cache" might come to Garching and try to convince you that a DBMS needs only a tiny fraction of RAM.
The Principle of Locality
1. Temporal locality: Recently accessed items are likely to be accessed again in the near future.
2. Spatial locality: Items whose addresses are near one another tend to be referenced close together in time.
Based on the recent past, we can thus predict with reasonable accuracy which data will be touched (read/written) in the near future.
I/O Latency Dominates Everything
(figure: hard disk platter rotating at 10 000/min)
Lack of I/O Latency...
...promises fabulous performance figures for MMDBMS. MMDBMS like MonetDB (CWI Amsterdam) indeed exhibit query performance improvements of two orders of magnitude over commercial disk-based DBMS. But: the DBMS internals need to be carefully engineered to realize this potential.
MonetDB: Binary Relations Only
Designed as a relational MMDBMS from the ground up, many design decisions in MonetDB seem peculiar:
- All tables have exactly two columns (binary relations).
- These columns are named head (h) and tail (t).
- Most operators (e.g., select()) implicitly act on the head (tail) column of a table.
MonetDB: Design Decisions
Details of CPU and main-memory architecture drove the development of MonetDB:
1. The narrower the tuples, the more tuples fit into a tiny fraction of RAM (e.g., the CPU cache).
2. Primitive operators spend fewer CPU cycles per tuple and behave in a predictable fashion.
CPU and Memory Performance Diverge
Since 1986, CPU performance has improved by a factor of 1.55 per year (55%/year), while DRAM (Dynamic RAM) access speed improves by only about 7%/year. Modern CPUs thus spend larger and larger fractions of their time waiting for memory reads and writes to complete (memory latency).
The CPU–Memory Speed Gap
(figure: CPU vs. memory speed over time)
Principle of Locality Comes to the Rescue
Design a hierarchical memory system, built from memories of different speeds and sizes.
Memory Access: The New Bottleneck
A memory access beyond the CPU cache easily costs 100s of CPU instructions; accessing disk-based memory accounts for about 1 million instructions. Current and future hardware trends make this worse. If the DBMS needs to perform costly memory accesses:
1. make sure to use all data moved into the cache/CPU,
2. try to access memory in a predictable fashion (enables prefetching).
Instruction-Level Parallelism
Modern CPUs, e.g., Intel's Itanium 2 or Pentium 4, feature execution pipelines which ideally complete one or more instructions per cycle (IPC):
1. Itanium 2: max. 6 instructions execute in a 7-stage pipeline: 6 × 7 = 42 instructions in flight
2. Pentium 4: max. 3 instructions execute in a 31-stage pipeline: 3 × 31 = 93 instructions in flight
Such parallelism cannot always be found in (database) code.
Tracing MySQL
For a simple SQL query like the following, MySQL will call a dedicated routine to perform the addition for each tuple individually:

SELECT A + B FROM R

The query engine first uses helper routines like rec_get_nth_field() to copy data into and out of MySQL's internal record representation.
Slow Addition in MySQL
An inherent problem of the MySQL query engine is its one-tuple-at-a-time approach:

foreach r ∈ R {
    s := Item_func_plus::val(r.A, r.B);
}

- Each invocation experiences its data dependencies in isolation: no potential for parallelism.
Tracing MySQL
The addition itself, performed by routine Item_func_plus::val(), is found to take 50 CPU cycles:
- Calling and returning from Item_func_plus::val() accounts for 30 CPU cycles.
- The actual addition consumes the remaining 20 CPU cycles.
Data Dependencies
The trace was performed on a MIPS R12000 CPU:
- It can perform 3 ALU (arithmetic) and 1 load/store operation per cycle; avg. instruction latency: 5 cycles.

LD  R1,<src1>   ; R1 ← <src1>
LD  R2,<src2>   ; R2 ← <src2>
ADD R3,R2,R1    ; R3 ← R1 + R2   (data dependency on R1, R2)
SD  R3,<dst>    ; <dst> ← R3     (data dependency on R3)
Loop Unrolling
Unrolling the tuple-at-a-time loop and expanding the code for Item_func_plus::val() reveals that there is no data dependency between the additions of different tuples:

s[n]   := r[n].A   + r[n].B;
s[n+1] := r[n+1].A + r[n+1].B;
s[n+2] := r[n+2].A + r[n+2].B;
Instruction Scheduling
Let the CPU or the compiler schedule dependent instructions such that instruction latency is hidden:

LD  R1,<src1>
LD  R2,<src2>
ADD R3,R2,R1
LD  R1,<src3>
LD  R2,<src4>
ADD R4,R2,R1
LD  R1,<src5>
LD  R2,<src6>
SD  R3,<dst1>
ADD R3,R2,R1
LD  R1,<src7>
LD  R2,<src8>
SD  R4,<dst2>

One addition completes every 3–4 CPU cycles.
Course Syllabus (1)
Chapter 0: Introduction and Motivation
Chapter 1: CPU Architecture and Instruction Sets
- CPU performance, instruction set principles, RISC
Chapter 2: Pipelining and Instruction-Level Parallelism (ILP)
- CPU pipelines, data and control hazards, parallelism, instruction scheduling, branch prediction, super-scalar CPUs
Course Syllabus (2)
Chapter 3: Database Systems: Where Does Time Go? (Part I)
- CPU usage, stalls, and misprediction in DBMSs
Chapter 4: How Database Systems Can Take Advantage of ILP
- Vectorized processing, SIMD instructions, predictable code, compression [MonetDB, X100]
Course Syllabus (3)
Chapter 5: The Memory Hierarchy (Close to the CPU)
- Caches, (reducing) miss rate and penalty, loop reorganization, virtual memory, TLBs
Chapter 6: Database Systems: Where Does Time Go? (Part II)
- Memory access behavior of database operators, impact of data layout
Course Syllabus (4)
Chapter 7: How Database Systems Can Exploit the Memory Hierarchy
- Data placement, column storage, database operation buffering, prefetching, compiler techniques [MonetDB, X100]