CSE 502 Graduate Computer Architecture. Lec 5-6 Memory Hierarchy Review


CSE 502 Graduate Computer Architecture
Lec 5-6: Memory Hierarchy Review
Larry Wittie, Computer Science, Stony Brook University
http://www.cs.sunysb.edu/~cse502 and ~lw
Slides adapted from David Patterson, UC-Berkeley cs252-s06

Review from last lecture
- Quantify and summarize performance: ratios, geometric mean, multiplicative standard deviation
- F&P: benchmarks age, disks fail, single points of failure are a danger
- Control via state machines and microprogramming
- Pipelining just overlaps tasks; easy if tasks are independent
- Speedup <= pipeline depth; if ideal CPI is 1, then:

  Speedup = (Pipeline depth / (1 + Pipeline stall CPI)) x (Cycle time unpipelined / Cycle time pipelined)

- Hazards limit performance on computers:
  - Structural: need more HW resources
  - Data (RAW, WAR, WAW): need forwarding, compiler scheduling
  - Control: delayed branch, prediction
- Exceptions and interrupts add complexity

Outline
- Review
- Memory hierarchy
- Locality
- Cache design
- Virtual address spaces
- Page table layout
- TLB design options
- Conclusion

Memory Hierarchy Review

Since 1980, CPU has outpaced DRAM...
[Figure: processor vs. DRAM performance (1/latency), log scale, by year. CPU improved ~60% per year (2X in 1.5 years); DRAM only ~9% per year (2X in 10 years); the latency gap grew ~50% per year.]
- Q. How do architects address this gap?
- A. Put smaller, faster cache memories between CPU and DRAM. Create a memory hierarchy.

1977: DRAM faster than microprocessors
- Apple II (1977) latencies: CPU clock 1000 ns, DRAM access 400 ns
[Photo: Steve Jobs and Steve Wozniak]

Levels of the Memory Hierarchy (capacity / access time / cost; staging transfer unit; moved by)
- CPU registers: 100s of bytes, <10s ns; instruction operands (1-8 bytes), moved by program/compiler
- Cache: KBytes, 10-100 ns, 1-0.1 cents/bit; blocks (8-128 bytes), moved by cache controller
- Main memory: MBytes, 200-500 ns, 0.0001-0.00001 cents/bit; pages (512 B-4 KB), moved by OS
- Disk: GBytes, 10 ms (10,000,000 ns), 10^-5 to 10^-6 cents/bit; files (MBytes), moved by user/operator
- Tape: almost infinite capacity, sec-min access, 10^-8 cents/bit
Upper levels are faster; lower levels are larger.

Memory Hierarchy: Apple iMac G5 (2004-05), 1977 + 27 years
- Reg: 1 KB, 1 cycle, 0.6 ns (managed by compiler)
- L1 Inst: 64 KB, 3 cycles, 1.9 ns (managed by hardware)
- L1 Data: 32 KB, 3 cycles, 1.9 ns (managed by hardware)
- L2: 512 KB, 11 cycles, 6.9 ns (managed by hardware)
- DRAM: 256 MB, 88 cycles, 55 ns (managed by OS, hardware, application)
- Disk: 80 GB, 10^7 cycles, 12 ms (managed by OS, hardware, application)
Goal: the illusion of a large, fast, cheap memory. Let programs address a memory space that scales to the disk size, at a speed usually nearly as fast as register access.
iMac G5 (1.6 GHz clock, 55 ns DRAM) vs. Apple II (1 MHz clock, 400 ns DRAM): CPU 1600X and DRAM 7.3X faster in 27 years => 2X every 2.5 years (CPU) vs. every 9.3 years (DRAM).

iMac's PowerPC 970 (G5): all caches on-chip
[Die photo: 64 KB L1 instruction cache, 32 KB L1 data cache, 512 KB L2, and two 1/2 KB register banks labeled on the die.]
- Original G5 = 64-bit single-core

The Principle of Locality
- The Principle of Locality: programs access a relatively small portion of the address space at any instant of time.
- Two different types of locality:
  - Temporal locality (locality in time): if an item is referenced, it will tend to be referenced again soon (e.g., loops, reuse)
  - Spatial locality (locality in space): if an item is referenced, items whose addresses are close by tend to be referenced soon (e.g., straight-line code, array access)
- For the last 20 years, HW has relied on locality for speed, using fast cache copies of parts of memory to lower the average memory access time (AMAT) of programs.
- Locality is a property of programs that is exploited in machine design.
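As a concrete illustration (mine, not from the slides), the loop below exhibits both kinds of locality at once:

```c
#include <stdio.h>

#define N 1024

/* Hypothetical illustration of the two kinds of locality:
 * - spatial: a[i] walks through consecutive addresses, so each cache
 *   block fetched on a miss supplies the next several accesses;
 * - temporal: sum and the loop variables are reused every iteration,
 *   so they stay in registers or the cache. */
int main(void) {
    static int a[N];                /* zero-initialized array */
    long sum = 0;
    for (int i = 0; i < N; i++)
        sum += a[i];                /* sequential accesses: spatial locality */
    printf("sum = %ld\n", sum);     /* sum itself: temporal locality */
    return 0;
}
```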

Programs with locality cache well...
[Figure: memory address (one dot per access) vs. time, from Donald J. Hatfield, Jeanette Gerald: Program Restructuring for Virtual Memory. IBM Systems Journal 10(3): 168-192 (1971). Horizontal bands of dots show temporal locality, diagonal runs show spatial locality, and scattered regions show bad locality behavior.]

Memory Hierarchy: Terminology
- Hit: data appears in some block in the upper level (example: Block X)
  - Hit rate: the fraction of memory accesses found in the upper level
  - Hit time: time to access the upper level = RAM access time + time to determine hit/miss
- Miss: data must be retrieved from a block in the lower level (Block Y)
  - Miss rate = 1 - (hit rate)
  - Miss penalty: time to replace a block in the upper level + time to deliver the block to the upper level
- Hit time << miss penalty (= 500 instructions on the 21264!)
[Diagram: the processor exchanges Block X with the small upper-level memory and Block Y with the large lower-level memory.]

CSE502: Administrivia
- Instructor: Prof. Larry Wittie
- Office/Lab: 1308 CompSci, lw AT icdotsunysbdotedu
- Office hours: TuTh 3:45-5:15 pm, if door open, or by appointment
- T.A.: none
- Class: TuTh 2:20-3:40 pm, 2120 CompSci
- Text: Computer Architecture: A Quantitative Approach, 4th Ed. (Oct 2006), ISBN 0123704901 or 978-0123704900; $65 ($50?) Amazon, $77 used SBU, S11
- Web page: http://www.cs.sunysb.edu/~cse502
- Current reading: Appendix C (back of CAQA4); Chapter 1 was the first week, Appendix A last week
- Tue 2/22 or Thu 2/24: after review, quiz on Appendices A & C
- Reading: memory hierarchy, Appendix C, this week
- Reading assignment: Chapter 2 for Thu 2/24

Cache Measures
- Hit rate: fraction of accesses found in that level
  - Usually so high that we talk instead about the miss rate = 1 - hit rate
- Miss rate fallacy: miss rate is as misleading an indicator of memory performance as MIPS is of CPU performance; what matters is average memory access time
- Average memory access time (AMAT) = hit time + miss rate x miss penalty (ns or clocks)
- Miss penalty: time to supply a missed block from the lower level, including any CPU-visible delays to save replaced write-back data to make room in the upper-level cache {all active caches are full}
  - Replacement time: time to make upper-level room for the new block
  - Access time: time to reach the lower level = f(latency to lower level)
  - Transfer time: time to transfer the block = f(bandwidth between upper & lower levels)
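A minimal sketch of the AMAT formula from this slide; the numeric values below are invented placeholders, not course data:

```c
#include <stdio.h>

/* Average memory access time, exactly as defined on the slide:
 * AMAT = hit time + miss rate * miss penalty.
 * Units are whatever you pass in (ns or clock cycles). */
static double amat(double hit_time, double miss_rate, double miss_penalty) {
    return hit_time + miss_rate * miss_penalty;
}

int main(void) {
    /* Assumed example: 1-cycle hit, 5% miss rate, 100-cycle penalty. */
    printf("AMAT = %.2f cycles\n", amat(1.0, 0.05, 100.0)); /* prints 6.00 */
    return 0;
}
```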

Four Questions for the Memory Hierarchy
(relative to both caches and DRAM for virtual memory)
- Q1: Where can a block be placed in the upper level? (block placement)
- Q2: How is a block found if it is in the upper level? (block identification)
- Q3: Which block should be replaced on a miss? (block replacement)
- Q4: What happens on a write? (write strategy)

Q1: Where can a block be placed in the upper level?
- Memory block 12 placed in an 8-block cache, under three cache organizations:
  - Fully associative: block 12 may go in any of the 8 cache blocks
  - Direct mapped: block 12 may go only in cache block (12 mod 8) = 4
  - 2-way set associative: block 12 may go in either way of set (12 mod 4) = 0 (4 sets = 8 blocks / 2 ways)
- Set-associative mapping: set = block number mod (number of sets)
[Diagram: the eligible cache blocks for memory block 12, out of memory blocks 0-31, highlighted for each organization; see the sketch below.]
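A small sketch of the three placement rules applied to the slide's example (names and layout are mine):

```c
#include <stdio.h>

#define CACHE_BLOCKS 8

/* Where may memory block b go?
 * Direct mapped: exactly one block, b mod CACHE_BLOCKS.
 * n-way set associative: any of n ways in set (b mod #sets).
 * Fully associative: anywhere (one set containing all blocks). */
int main(void) {
    int b = 12;
    int dm_block = b % CACHE_BLOCKS;       /* 12 mod 8 = 4 */
    int sa2_set  = b % (CACHE_BLOCKS / 2); /* 12 mod 4 = 0 (2-way) */
    printf("direct mapped: cache block %d\n", dm_block);
    printf("2-way set associative: either way of set %d\n", sa2_set);
    printf("fully associative: any of the %d blocks\n", CACHE_BLOCKS);
    return 0;
}
```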

Q2: How do we tell if a block is in the upper-level cache? (IMPORTANT SLIDE!)
- Example: a 16 KB direct-mapped (1-way) cache = 2^8 = 256 sets x 1 block/set x 2^6 = 64 bytes/block.
- The 32-bit address splits into bit fields used to access the cache:
  - Tag: 18 bits (bits 31-14), stored and compared to detect a hit
  - Index: 8 bits (bits 13-6), selects one of the 256 cache sets
  - Offset: 6 bits (bits 5-0) = 4 bits (16 words/block) + 2 bits (4 bytes/word), i.e. 64 bytes/block
- The index selects the cache set, the location of all possible blocks that map there; only the tag must be checked for each block, never the index or offset bits.
- Increasing associativity shrinks the index and expands the tag.
[Diagram: valid bit + 18-bit tag + 512-bit data per entry; the tag compare produces Hit, and a mux selects the addressed 32-bit word. The same tag/index/offset split of a block (a.k.a. page) address is used for virtual memory.]
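A sketch of the address split this slide describes (18-bit tag, 8-bit index, 6-bit offset); the helper names are mine:

```c
#include <stdint.h>
#include <stdio.h>

/* Bit fields for the slide's 16 KB direct-mapped cache:
 * 64 B blocks -> 6 offset bits, 256 sets -> 8 index bits,
 * and the remaining 18 bits are the tag. */
#define OFFSET_BITS 6
#define INDEX_BITS  8

static uint32_t addr_offset(uint32_t a) { return a & ((1u << OFFSET_BITS) - 1); }
static uint32_t addr_index(uint32_t a)  { return (a >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1); }
static uint32_t addr_tag(uint32_t a)    { return a >> (OFFSET_BITS + INDEX_BITS); }

int main(void) {
    uint32_t a = 0x12345678;   /* arbitrary example address */
    printf("tag=0x%x index=%u offset=%u\n",
           (unsigned)addr_tag(a), (unsigned)addr_index(a), (unsigned)addr_offset(a));
    return 0;
}
```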

Q3: Which block should be replaced after a miss?
(After start-up, the cache is nearly always full.)
- Easy if direct mapped: only one block (one way) per set index, so no choice.
- If set associative or fully associative, must choose:
  - Random ("Ran"): easy to implement, but not best
  - LRU (least recently used): best, but hard to implement if more than 8-way; if only 2-way, 1 bit/way suffices (see the sketch below)
  - Other LRU approximations also beat random

Miss rates for 3 cache sizes & associativities:

              2-way            4-way            8-way
  Data size   LRU     Ran      LRU     Ran      LRU     Ran
  16 KB       5.2%    5.7%     4.7%    5.3%     4.4%    5.0%
  64 KB       1.9%    2.0%     1.5%    1.7%     1.4%    1.5%
  256 KB      1.15%   1.17%    1.13%   1.13%    1.12%   1.12%

- Random picks give about the same low miss rate as LRU for large caches.
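For the 2-way case above, one bit per set implements true LRU; a minimal sketch with names of my own choosing:

```c
#include <stdio.h>

/* For a 2-way set, true LRU needs only one bit per set:
 * lru_way[set] names the way to evict next.  On every access
 * to way w, the *other* way becomes least recently used. */
#define SETS 4
static int lru_way[SETS];

static void touch(int set, int way) { lru_way[set] = 1 - way; }
static int  victim(int set)         { return lru_way[set]; }

int main(void) {
    touch(0, 0);                                 /* access way 0 of set 0 */
    touch(0, 1);                                 /* then way 1 */
    printf("evict way %d\n", victim(0));         /* way 0 is now LRU */
    return 0;
}
```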

Q4: Write policy: what happens on a write (store)?
- Policy:
  - Write-through: a data word written to the cache block is also written to the next lower-level memory (e.g., an instruction's sw to L1$ also goes to L2$).
  - Write-back: write the new data word only to the cache block; update the lower level just before a written block leaves the cache, so its true value is not lost.
- Debugging: write-through is easier; write-back is harder.
- Can read misses force writes? Write-through: no. Write-back: yes (this used to slow some reads; now a write buffer helps).
- Do repeated writes touch the lower level? Write-through: yes, memory is busier. Write-back: no.
- Additional option: let writes to an uncached address allocate a new cache line ("write-allocate"); otherwise just write through.

Write Buffers for Write-Through Caches
[Diagram: Processor -> Cache -> lower-level memory, with a write buffer between cache and lower level.]
- The write buffer holds (addresses &) data awaiting write-through to lower levels.
- Q. Why a write buffer? A. So the CPU does not stall on writes.
- Q. Why a buffer, why not just one register? A. Bursts of writes are common before all are written through.
- Q. Are read-after-write (RAW) hazards an issue for the write buffer? A. Yes! Either drain the buffer before the next read, or check buffer addresses on a read miss (see the sketch below).
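A sketch of the second answer (checking buffer addresses on a read miss), assuming a tiny 4-entry buffer; all names are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy write buffer: entries awaiting write-through to the next level.
 * A read miss must either find its data here or be sure no pending
 * write matches, or it would read stale data from memory. */
#define WB_ENTRIES 4
struct wb_entry { bool valid; uint32_t addr; uint32_t data; };
static struct wb_entry wb[WB_ENTRIES];

/* Returns true (and the value) if the read-miss address matches a
 * pending write; otherwise the miss may safely go to memory. */
static bool wb_match(uint32_t addr, uint32_t *data) {
    for (int i = 0; i < WB_ENTRIES; i++)
        if (wb[i].valid && wb[i].addr == addr) { *data = wb[i].data; return true; }
    return false;
}

int main(void) {
    wb[0] = (struct wb_entry){ true, 0x1000, 42 };  /* pending write */
    uint32_t v;
    if (wb_match(0x1000, &v)) printf("RAW caught: forwarded %u\n", v);
    return 0;
}
```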

5 Basic Cache Optimizations
- Reducing miss rate:
  1. Larger block size (reduces compulsory, "cold", misses)
  2. Larger cache size (reduces capacity misses)
  3. Higher associativity (reduces conflict misses)
  (Multiprocessors also have cache coherence misses: the "4 Cs".)
- Reducing miss penalty:
  4. Multilevel caches {total miss rate = prod_k (local miss rate_k), the product over all cache levels k = 1 to max}
- Reducing hit time (minimize cache latency):
  5. Giving reads priority over writes, since the CPU is waiting: a read completes before earlier writes still sitting in the write buffer.
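A sketch of the multilevel formula in item 4: the global miss rate is the product of the per-level local miss rates. The values below are invented:

```c
#include <stdio.h>

/* Global miss rate of a k-level cache = product of local miss rates,
 * as stated on the slide; e.g. 5% local in L1 times 20% local in L2
 * gives a 1% global miss rate. */
static double global_miss_rate(const double *local, int levels) {
    double p = 1.0;
    for (int k = 0; k < levels; k++) p *= local[k];
    return p;
}

int main(void) {
    double local[] = { 0.05, 0.20 };  /* assumed L1 and L2 local miss rates */
    printf("global miss rate = %.3f\n", global_miss_rate(local, 2)); /* 0.010 */
    return 0;
}
```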

Outline
- Review
- Memory hierarchy
- Locality
- Cache design
- Virtual address spaces
- Page table layout
- TLB design options
- Conclusion

The Limits of Physical Addressing
- "Programs use physical addresses of memory locations": the simple addressing method of archaic pre-1978 computers.
[Diagram: CPU address pins A0-A31 and data pins D0-D31 wired directly to memory.]
- All programs shared one address space: the physical address space.
- Machine-language programs had to be aware of the machine organization.
- No way to prevent a program from accessing any machine resource in memory.

Solution: Add a Layer of Indirection
[Diagram: the CPU issues virtual addresses; a virtual-to-physical address translation unit sits between CPU and main memory, which sees only physical addresses.]
- All user programs run in a standardized virtual address space starting at zero.
- Needs fast(!) address translation hardware, managed by the operating system (OS), to map each virtual address to physical memory.
- Hardware supports modern OS features: memory protection, address translation, sharing.

Three Advantages of Virtual Memory
- Translation: a program can be given a consistent view of memory, even though physical memory is scrambled (pages of programs in any order in physical RAM).
  - Makes multithreading reasonable (now used a lot!)
  - Only the most important part of each program (the "working set") must be in physical memory at any one time.
  - Contiguous structures (like stacks) use only as much physical memory as necessary, yet can still grow later as needed, without recopying.
- Protection (most important now): different threads (or processes) are protected from each other.
  - Different pages can be given special behavior (read only, invisible to user programs, not cached).
  - Kernel and OS data are protected from access by user programs.
  - Very important for protection from malicious programs.
- Sharing: the same physical page can be mapped to multiple users ("shared memory" holding the C++ compiler for many users at once).

Page tables encode mappings of code's virtual addresses to the physical memory address space
[Diagram: each page of a virtual address space maps through the page table to a physical memory frame; the OS manages the page table for each address-space ID.]
- A virtual address space (V.A.S.) is divided into blocks of memory called pages.
- A machine usually supports pages of a few sizes (MIPS R4000).
- A page table is indexed by a virtual address.
- A valid page table entry codes the physical memory frame address currently present for the page.

Details of the Page Table
[Diagram: a page table pointer in a base register locates the page table in physical memory; a virtual address indexes the table; the PTE holds a valid bit (V), access rights, and the physical page number.]
- Virtual address (for 4,096 bytes/page) = virtual page number + 12-bit offset; the byte offset is the same in VA and PA.
- The virtual page number indexes into the page table; the page table maps virtual page numbers to physical frames (PTE = page table entry), and the frame number is concatenated with the offset to form the physical address.
- Virtual memory treats main memory as a cache for disk.
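A sketch of the translation this slide diagrams (4 KB pages, so a 12-bit offset shared by VA and PA); the PTE layout is simplified and the names are mine:

```c
#include <stdint.h>
#include <stdio.h>

/* Simplified PTE: valid bit + physical frame number.  Real PTEs also
 * hold the access-rights bits the slide mentions. */
#define PAGE_BITS 12                     /* 4096 bytes per page */
struct pte { unsigned valid : 1; uint32_t frame; };

/* Returns the physical address, or all-ones on a page fault. */
static uint32_t translate(const struct pte *pt, uint32_t va) {
    uint32_t vpn = va >> PAGE_BITS;
    uint32_t off = va & ((1u << PAGE_BITS) - 1);   /* same offset in VA & PA */
    if (!pt[vpn].valid) return (uint32_t)-1;       /* page fault: OS takes over */
    return (pt[vpn].frame << PAGE_BITS) | off;
}

int main(void) {
    static struct pte pt[16] = { [2] = { 1, 7 } }; /* VPN 2 -> frame 7 */
    uint32_t va = (2u << PAGE_BITS) | 0x34;
    printf("PA = 0x%x\n", (unsigned)translate(pt, va)); /* 0x7034 */
    return 0;
}
```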

All page tables may not fit in memory!
- A table for 4 KB pages in a 32-bit address space (max 4 GB) has 1M entries, and each process needs its own address-space tables!
- Two-level page tables: the 32-bit virtual address = P1 index (bits 31-22) + P2 index (bits 21-12) + page offset (bits 11-0).
- A single top-level table is wired (stays) in main memory.
- Only a subset of the 1024 second-level tables are in main memory; the rest are on disk or unallocated.
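A sketch of the two-level walk for the 10/10/12 split shown above; the structure names are mine:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* 32-bit VA split as on the slide: 10-bit P1 index, 10-bit P2 index,
 * 12-bit page offset.  Only the top-level table must stay resident;
 * second-level tables may be on disk or unallocated (NULL here). */
struct l2 { uint32_t frame[1024]; };
struct l1 { struct l2 *tables[1024]; };

static uint32_t walk(const struct l1 *top, uint32_t va) {
    uint32_t p1 = va >> 22, p2 = (va >> 12) & 0x3FF, off = va & 0xFFF;
    const struct l2 *t = top->tables[p1];
    if (!t) { fprintf(stderr, "fault: second-level table absent\n"); exit(1); }
    return (t->frame[p2] << 12) | off;
}

int main(void) {
    static struct l2 t2; t2.frame[1] = 5;   /* P2 index 1 -> frame 5 */
    static struct l1 t1; t1.tables[0] = &t2;
    uint32_t va = (0u << 22) | (1u << 12) | 0x10;
    printf("PA = 0x%x\n", (unsigned)walk(&t1, va)); /* 0x5010 */
    return 0;
}
```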

VM and Disk: Page Replacement Policy
[Diagram: a clock sweep over the set of all pages in memory, with dirty and used bits per page table entry; freed pages go to a free list.]
- Used bit: set to 1 on any reference. Dirty bit: set when the page is written.
- Tail pointer: clears (= 0) the used bits in the page table, saying "maybe not used recently".
- Head pointer: places a page on the free list if its used bit is still clear (= 0); freed pages with the dirty bit set are scheduled to be written to disk.
- Architect's role: support setting the dirty and used bits.
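A sketch of the clock scan described above (the sweep clears used bits; a page found with its used bit still clear is freed, written back first if dirty). This is one common reading of the slide, not the exact course code:

```c
#include <stdbool.h>
#include <stdio.h>

/* Clock (second chance) replacement over a circular set of frames. */
#define FRAMES 5
static bool used[FRAMES]  = { 1, 0, 1, 1, 0 };  /* toy reference bits */
static bool dirty[FRAMES] = { 0, 0, 1, 0, 0 };  /* toy dirty bits */

static int clock_pick(int *hand) {
    for (;;) {
        int f = *hand;
        *hand = (*hand + 1) % FRAMES;
        if (!used[f]) {
            if (dirty[f]) printf("schedule frame %d for disk write\n", f);
            return f;                /* used bit still clear: victim found */
        }
        used[f] = false;             /* give the page a second chance */
    }
}

int main(void) {
    int hand = 0;
    printf("freed frame %d\n", clock_pick(&hand)); /* frame 1 */
    return 0;
}
```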

TLB Design Concepts

MIPS Address Translation: How does it work?
[Diagram: the CPU issues virtual addresses; the Translation Look-Aside Buffer (TLB) maps them to physical addresses on the way to memory.]
- The TLB is a small, very fast, fully associative cache of mappings from virtual to physical addresses; it also contains protection bits for each virtual address.
- What is the per-process table of mappings that the TLB caches? Recently used VA -> PA page table entries.
- Fast common case: if a virtual address is in the TLB, the process has permission to read/write it.

The TLB caches page table entries
- Physical and virtual pages must be the same size; here, 1024 bytes/page each.
[Diagram: a virtual address (page number + offset) probes the TLB; on a hit the cached frame number is concatenated with the offset to form the physical address, e.g. virtual page 2 -> physical frame 2 in the page table for the current address-space ID.]
- MIPS handles TLB misses in software (with random replacement of TLB entries); other machines use hardware.
- V=0 pages reside on disk or have not yet been allocated to this ASID; the OS handles V=0 as a page fault.
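A sketch of a tiny fully associative TLB in front of a page-table lookup, with software miss handling in the spirit of the MIPS scheme described above; sizes and names are assumptions:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Tiny fully associative TLB: each entry caches one VPN -> frame
 * mapping from the page table (1024-byte pages, as on the slide). */
#define PAGE_BITS 10
#define TLB_SIZE  4
struct tlb_entry { bool valid; uint32_t vpn, frame; };
static struct tlb_entry tlb[TLB_SIZE];
static uint32_t page_table[8] = { 3, 7, 2, 0, 0, 0, 0, 0 }; /* toy PT */

static uint32_t translate(uint32_t va) {
    uint32_t vpn = va >> PAGE_BITS, off = va & ((1u << PAGE_BITS) - 1);
    for (int i = 0; i < TLB_SIZE; i++)            /* fast path: TLB hit */
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return (tlb[i].frame << PAGE_BITS) | off;
    /* TLB miss: walk the page table (a software handler on MIPS),
     * then refill one entry; real MIPS picks the slot at random. */
    uint32_t frame = page_table[vpn];
    tlb[0] = (struct tlb_entry){ true, vpn, frame };
    return (frame << PAGE_BITS) | off;
}

int main(void) {
    printf("PA = 0x%x\n", (unsigned)translate((1u << PAGE_BITS) | 0x21)); /* miss */
    printf("PA = 0x%x\n", (unsigned)translate((1u << PAGE_BITS) | 0x40)); /* hit  */
    return 0;
}
```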

Can TLB translation overlap cache indexing? {Maybe, if the cache is tiny.}
[Diagram: the virtual page number goes to the TLB while the page-offset bits (cache index + byte select) go directly to the cache; the TLB's physical page number is then compared with the cache tags to produce a hit.]
- Having the cache index within the page offset works, but:
- Q. What is the downside? A. Inflexibility: the size of the cache is limited by the page size.

Problems With Overlapped TLB Access
- Overlapped access works only as long as the address bits used to index into the cache do not change as a result of VA translation.
- This usually limits overlapping to small caches, large page sizes, or highly set-associative caches, if you want a large-capacity cache.
- Example: suppose everything stays the same except that the cache is increased from 4 KB to 8 KB. With a 12-bit page displacement (4 KB pages), the cache index now needs one bit from the virtual page number: that bit is changed by VA translation, but it is needed for cache lookup.
- Solutions: go to 8 KB page sizes; go to a 2-way set-associative cache (e.g., 2 x 1K sets); or have SW guarantee VA[13] = PA[13].
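The constraint in this slide can be stated as an inequality: indexing can overlap translation only if the index and block-offset bits fit inside the page offset, i.e. (cache size / associativity) <= page size. A small sketch checking it (helper names are mine):

```c
#include <stdio.h>

/* Overlap is safe when the index+offset bits of the cache address fit
 * within the page offset: (cache size / associativity) <= page size. */
static int overlap_ok(long cache_bytes, int ways, long page_bytes) {
    return cache_bytes / ways <= page_bytes;
}

int main(void) {
    /* The slide's scenario: 4 KB pages. */
    printf("4KB direct-mapped: %s\n", overlap_ok(4096, 1, 4096) ? "ok" : "no");
    printf("8KB direct-mapped: %s\n", overlap_ok(8192, 1, 4096) ? "ok" : "no");
    printf("8KB 2-way:         %s\n", overlap_ok(8192, 2, 4096) ? "ok" : "no");
    return 0;
}
```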

Can the CPU use virtual addresses for its cache?
[Diagram: the CPU indexes a virtual cache directly; the TLB is consulted only on cache misses, on the way to main memory.]
- If the cache index is in the virtual address, only cache misses use the TLB!
- Downside: a subtle, fatal problem. What is it? A. Aliasing: the synonym problem. If two address spaces share a physical frame, the data may be in the cache twice; maintaining consistency is a nightmare.

Summary #1/3: The Cache Design Space
- Several interacting dimensions:
  - cache size
  - block size
  - associativity
  - replacement policy
  - write-through vs. write-back
  - write allocation
- The optimal choice is a compromise:
  - depends on access characteristics (workload; use: I-cache, D-cache, TLB)
  - depends on technology / cost
- Simplicity often wins.
[Figure: for each design factor (cache size, associativity, block size), a curve from "bad" through "good" and back; less and more of each factor both hurt.]

Summary #2/3: Caches
- The Principle of Locality: programs access a relatively small portion of the address space at any instant of time.
  - Temporal locality: locality in time
  - Spatial locality: locality in space
- Three major uniprocessor categories of cache misses:
  - Compulsory misses: sad facts of life; example: cold-start misses.
  - Capacity misses: increase the cache size.
  - Conflict misses: increase cache size and/or associativity. Nightmare scenario: the ping-pong effect!
  - (Coherence misses, in multiprocessors: avoid frequent use of shared variables in parallel programs.)
- Write policy: write-through vs. write-back.
- Today CPU time is a function of (ops, cache misses), not just f(ops): increasing performance affects compilers, data structures, and algorithms.

Summary #3/3: TLB, Virtual Memory
- Page tables map virtual addresses to physical addresses; TLBs are important for fast translation.
- TLB misses are significant in processor performance: this is a funny time, since most systems cannot access all of the second-level cache without TLB misses! The answer in newer processors is two levels of TLB.
- Caches, TLBs, and virtual memory are all understood by examining how they deal with four questions: 1) Where can a block be placed? 2) How is a block found? 3) Which block is replaced on a miss? 4) How are writes handled?
- Today VM allows many processes to share a single memory without having to swap all processes to disk; today VM protection is more important than the memory-hierarchy benefits, but computers are still insecure.
- Short in-class open-book quiz on Appendices A-C & Chapter 1 near the start of the next class (2/22 or 2/24). Bring a calculator. (Please put your best email address on your exam.)