Memory Hierarchy Review


EECS 252 Graduate Computer Architecture
Lecture 3 (continued): Review of Caches and Virtual Memory
January 27th, 2010
John Kubiatowicz
Electrical Engineering and Computer Sciences, University of California, Berkeley
http://www.eecs.berkeley.edu/~kubitron/cs252

Review: Control and Pipelining

- Control via state machines and microprogramming
- Pipelining just overlaps tasks; easy if tasks are independent
- Speedup comes from pipeline depth; if the ideal CPI is 1, then

  \[ \text{Speedup} = \frac{\text{Pipeline depth}}{1 + \text{Pipeline stall CPI}} \times \frac{\text{Cycle time}_{\text{unpipelined}}}{\text{Cycle time}_{\text{pipelined}}} \]

- Hazards limit performance on computers:
  - Structural: need more HW resources
  - Data (RAW, WAR, WAW): need forwarding, compiler scheduling
  - Control: delayed branch, prediction
- Exceptions and interrupts add complexity

Since 1980, the CPU has outpaced DRAM...

[Figure: performance (1/latency, log scale: 10, 100, 1000) vs. year, 1980-2000. CPU performance grew 60% per year (2X in 1.5 years); DRAM performance grew 9% per year (2X in 10 years); the processor-memory gap grew 50% per year.]

How do architects address this gap? Put small, fast cache memories between the CPU and DRAM: create a memory hierarchy.
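As a back-of-the-envelope check (mine, not the lecture's) on how the quoted growth rates compound into the gap:

\[ \frac{1.60}{1.09} \approx 1.47 \quad\Longrightarrow\quad 1.47^{20} \approx 2.2 \times 10^{3} \]

That is, dividing the 60%-per-year CPU rate by the 9%-per-year DRAM rate gives roughly the 50%-per-year gap growth the figure cites, and over the twenty years shown the latency gap widens by about three orders of magnitude.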

1977: DRAM faster than microprocessors

- Apple ][ (1977): CPU 1000 ns, DRAM 400 ns
- Steve Jobs, Steve Wozniak

Memory Hierarchy

Take advantage of the principle of locality to:
- present as much memory as is available in the cheapest technology, and
- provide access at the speed offered by the fastest technology.

Processor (datapath, control, registers) -> on-chip cache -> second-level cache (SRAM) -> main memory (DRAM/FLASH/PCM) -> secondary storage (disk/FLASH/PCM) -> tertiary storage (tape/cloud storage)

Level                                 Speed (ns), approx.            Size (bytes)
Registers                             1s                             100s
On-chip / second-level cache          10s-100s                       Ks-Ms
Main memory                           100s                           Ms
Secondary storage                     10,000,000s (10s of ms)        Gs
Tertiary storage                      10,000,000,000s (10s of sec)   Ts

The Principle of Locality

- Programs access a relatively small portion of the address space at any instant of time.
- Two different types of locality:
  - Temporal locality (locality in time): if an item is referenced, it will tend to be referenced again soon (e.g., loops, reuse).
  - Spatial locality (locality in space): if an item is referenced, items whose addresses are close by tend to be referenced soon (e.g., straight-line code, array access).
- For the last 15 years, HW has relied on locality for speed.

Programs with locality cache well...

[Figure: memory address (one dot per access) vs. time, showing bands of spatial and temporal locality against bad locality behavior. Donald J. Hatfield, Jeanette Gerald: "Program Restructuring for Virtual Memory," IBM Systems Journal 10(3): 168-192 (1971).]
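As an illustration of spatial locality (my example, not the lecture's): traversing a 2-D array in row-major order touches consecutive addresses and caches well, while column-major order strides by an entire row per access and caches poorly.

```c
#include <stddef.h>

#define ROWS 1024
#define COLS 1024

static int a[ROWS][COLS];

/* Good spatial locality: consecutive addresses within each row. */
long sum_row_major(void) {
    long sum = 0;
    for (size_t i = 0; i < ROWS; i++)
        for (size_t j = 0; j < COLS; j++)
            sum += a[i][j];
    return sum;
}

/* Poor spatial locality: each access strides COLS * sizeof(int) bytes. */
long sum_col_major(void) {
    long sum = 0;
    for (size_t j = 0; j < COLS; j++)
        for (size_t i = 0; i < ROWS; i++)
            sum += a[i][j];
    return sum;
}
```

With 64-byte cache lines, the row-major loop misses roughly once per 16 ints; the column-major loop can miss on nearly every access once a column no longer fits in the cache.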

Memory Hierarchy: Apple iMac G5

           Reg     L1 Inst  L1 Data  L2      DRAM    Disk
Size       1K      64K      32K      512K    256M    80G
Latency    1 cy    3 cy     3 cy     11 cy   88 cy   ~10^7 cy
           0.6 ns  1.9 ns   1.9 ns   6.9 ns  55 ns   12 ms

- Registers are managed by the compiler; the caches by hardware; DRAM and disk by the OS, hardware, and application.
- Goal: the illusion of large, fast, cheap memory. Let programs address a memory space that scales to the disk size, at a speed that is usually as fast as register access.
- The iMac's 1.6 GHz PowerPC 970 keeps all caches on-chip: registers (1K), L1 (64K instruction), L1 (32K data), 512K L2.

Memory Hierarchy Terminology

- Hit: the data appears in some block in the upper level (example: Block X).
  - Hit rate: the fraction of memory accesses found in the upper level.
  - Hit time: time to access the upper level = RAM access time + time to determine hit/miss.
- Miss: the data must be retrieved from a block in the lower level (Block Y).
  - Miss rate = 1 - (hit rate).
  - Miss penalty: time to replace a block in the upper level + time to deliver the block to the processor.
- Hit time << miss penalty (500 instructions on the 21264!). (A worked example follows after the next slide.)

4 Questions for the Memory Hierarchy

- Q1: Where can a block be placed in the upper level? (block placement)
- Q2: How is a block found if it is in the upper level? (block identification)
- Q3: Which block should be replaced on a miss? (block replacement)
- Q4: What happens on a write? (write strategy)
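The slide does not state it explicitly, but these terms combine in the standard average memory access time (AMAT) identity; the numbers below are illustrative, not from the lecture:

\[ \text{AMAT} = \text{Hit time} + \text{Miss rate} \times \text{Miss penalty} \]

For example, with a 1-cycle hit time, a 5% miss rate, and a 100-cycle miss penalty, AMAT = 1 + 0.05 x 100 = 6 cycles.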

Q1: Where can a block be placed in the upper level?

Example: memory block 12 placed in an 8-block cache.
- Fully associative: block 12 can go in any of the 8 cache blocks ("fully mapped").
- Direct mapped: block 12 can go only in cache block (12 mod 8) = 4.
- 2-way set associative: block 12 can go anywhere in set (12 mod 4) = 0.
Set-associative mapping: set = block number modulo number of sets.

Sources of Cache Misses

- Compulsory (cold start or process migration; first reference): the first access to a block.
  - A cold fact of life: not a whole lot you can do about it.
  - Note: if you are going to run billions of instructions, compulsory misses are insignificant.
- Capacity: the cache cannot contain all the blocks accessed by the program.
  - Solution: increase cache size.
- Conflict (collision): multiple memory locations map to the same cache location.
  - Solution 1: increase cache size. Solution 2: increase associativity.
- Coherence (invalidation): another process (e.g., I/O) updates memory.

Q2: How is a block found if it is in the upper level?

Block address = Tag | Index, followed by the block offset (data select). (A decoding sketch in C follows below.)
- The index is used to look up candidates in the cache; it identifies the set.
- The tag is used to identify the actual copy; if no candidates match, declare a cache miss.
- The block is the minimum quantum of caching. The data-select field selects data within the block; many caching applications don't have a data-select field.

Block Size and Spatial Locality

- The block is the unit of transfer between the cache and memory. Split the CPU address into a block address of 32 - b bits and an offset of b bits; 2^b = block size, a.k.a. line size (in bytes). [Figure: tag | Word0 Word1 Word2 Word3: a 4-word block with, apparently, b = 2 word-select bits.]
- Larger block sizes have distinct hardware advantages: less tag overhead, and they exploit fast burst transfers from DRAM and over wide buses.
- What are the disadvantages of increasing block size? Fewer blocks -> more conflicts. Can waste bandwidth.
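A minimal sketch of the Q2 address split in C; the geometry (32-byte blocks, 32 sets) is an assumption for illustration, not the lecture's:

```c
#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 5   /* 2^5 = 32-byte blocks */
#define INDEX_BITS  5   /* 2^5 = 32 sets        */

int main(void) {
    uint32_t addr   = 0xDEADBEEFu;
    uint32_t offset = addr & ((1u << OFFSET_BITS) - 1);
    uint32_t index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);
    printf("tag=0x%x index=%u offset=%u\n",
           (unsigned)tag, (unsigned)index, (unsigned)offset);
    return 0;
}
```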

Review: Direct Mapped Cache

- Direct mapped, 2^N byte cache:
  - The uppermost (32 - N) bits are always the tag.
  - The lowest M bits are the byte select (block size = 2^M).
- Example: 1 KB direct mapped cache with 32 B blocks.
  - The cache index (bits 9:5, e.g., 0x01) chooses the potential block.
  - The tag (bits 31:10, e.g., 0x50) is checked against the stored tag (along with the valid bit) to verify the block.
  - The byte select (bits 4:0, e.g., 0x00) chooses the byte within the block.

Review: Set Associative Cache

- N-way set associative: N entries per cache index; N direct-mapped caches operate in parallel.
- Example: two-way set associative cache. (A lookup sketch in C follows this group of slides.)
  - The cache index selects a set from the cache.
  - The two tags in the set are compared to the input tag in parallel.
  - Data is selected (via a mux) based on the tag comparison result: a hit if either comparator matches a valid entry.

Review: Fully Associative Cache

- Fully associative: every block can hold any line; the address does not include a cache index.
- Compare the tags of all cache entries in parallel.
- Example: with 32 B blocks, the tag is 27 bits long (bits 31:5), so we need N 27-bit comparators. We still have the byte select (e.g., 0x01) to choose within the block.

Q3: Which block should be replaced on a miss?

- Easy for direct mapped. Set associative or fully associative:
  - LRU (least recently used): appealing, but hard to implement for high associativity.
  - Random: easy, but how well does it work?

Miss rates by associativity and cache size:

Assoc    Policy    16K      64K      256K
2-way    LRU       5.2%     1.9%     1.15%
2-way    Random    5.7%     2.0%     1.17%
4-way    LRU       4.7%     1.5%     1.13%
4-way    Random    5.3%     1.7%     1.13%
8-way    LRU       4.4%     1.4%     1.12%
8-way    Random    5.0%     1.5%     1.12%
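The lookup sketch promised above: a minimal 2-way set-associative cache with one LRU bit per set, tying together Q1-Q3. The geometry and names are assumptions of mine, and only tags are modeled (no data array):

```c
#include <stdbool.h>
#include <stdint.h>

#define OFFSET_BITS 5   /* 32-byte lines (assumed) */
#define INDEX_BITS  3   /* 8 sets (assumed)        */
#define SETS        (1u << INDEX_BITS)

typedef struct {
    bool     valid[2];
    uint32_t tag[2];
    int      lru;       /* which way to evict next */
} Set;

static Set cache[SETS];

/* Returns true on a hit; on a miss, fills the LRU way. */
bool cache_access(uint32_t addr) {
    uint32_t index = (addr >> OFFSET_BITS) & (SETS - 1);
    uint32_t tag   = addr >> (OFFSET_BITS + INDEX_BITS);
    Set *s = &cache[index];

    /* Both tags in the set are compared ("in parallel" in hardware). */
    for (int way = 0; way < 2; way++) {
        if (s->valid[way] && s->tag[way] == tag) {
            s->lru = 1 - way;           /* the other way is now LRU */
            return true;                /* hit */
        }
    }
    int victim = s->lru;                /* Q3: replace the LRU way */
    s->valid[victim] = true;
    s->tag[victim]   = tag;
    s->lru = 1 - victim;
    return false;                       /* miss */
}
```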

Q4: What happens on a write?

                                    Write-Through                            Write-Back
Policy                              Data written to the cache block is       Write data only to the cache; update the
                                    also written to lower-level memory       lower level when a block falls out of the cache
Debug                               Easy                                     Hard
Do read misses produce writes?      No                                       Yes
Do repeated writes make it to
the lower level?                    Yes                                      No

Additional option: let writes to an un-cached address allocate a new cache line ("write-allocate").

Write Buffers for Write-Through Caches

A write buffer sits between the processor/cache and lower-level memory, holding data awaiting write-through. (A sketch of the read check appears below.)
- Q: Why a write buffer? A: So the CPU doesn't stall.
- Q: Why a buffer, why not just one register? A: Bursts of writes are common.
- Q: Are read-after-write (RAW) hazards an issue for the write buffer? A: Yes! Drain the buffer before the next read, or check the write buffer for a match on reads.

5 Basic Cache Optimizations

Reducing miss rate:
1. Larger block size (compulsory misses)
2. Larger cache size (capacity misses)
3. Higher associativity (conflict misses)
Reducing miss penalty:
4. Multilevel caches
Reducing hit time:
5. Giving reads priority over writes (e.g., a read completes before earlier writes still sitting in the write buffer)

Administrivia

- Paper readings: important for your graduate career.
- Remember: everything is on the web site: http://www.cs.berkeley.edu/~kubitron/cs252
- Web site signup: make sure to sign up for the class if you haven't yet.
- Don't forget the ISCA retrospective.
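A sketch of the RAW check on reads described in the write-buffer slide; the entry count and names are my assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

#define WB_ENTRIES 4    /* buffer depth is an assumption */

typedef struct {
    bool     valid;
    uint32_t addr;
    uint32_t data;
} WBEntry;

static WBEntry wbuf[WB_ENTRIES];

/* Before a read goes to lower-level memory, check the write buffer
 * for a matching address; forwarding the buffered value resolves the
 * RAW hazard without draining the whole buffer. */
bool wbuf_match(uint32_t addr, uint32_t *data_out) {
    for (int i = 0; i < WB_ENTRIES; i++) {
        if (wbuf[i].valid && wbuf[i].addr == addr) {
            *data_out = wbuf[i].data;
            return true;
        }
    }
    return false;   /* no match: safe to read the lower level */
}
```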

RISC: The Integrated Systems View (Discussion of Papers)

- "The Case for the Reduced Instruction Set Computer," David Patterson and David Ditzel
- "Comments on 'The Case for the Reduced Instruction Set Computer,'" Doug Clark and William Strecker
- "Retrospective on High-Level Computer Architecture," David Ditzel and David Patterson
- In-class discussion of these papers

What is virtual memory?

- Virtual memory treats main memory as a cache for the disk. The blocks in this cache are called pages; a typical page size is 1K-8K.
- A virtual address splits into a virtual page number and an offset. The page-table base register, indexed by the virtual page number, locates an entry in the page table, which maps virtual page numbers to physical frames; the physical page number is concatenated with the offset to form the physical address.
- The page table is located in physical memory; each entry carries a valid bit and access rights. PTE = page table entry.

What is in a Page Table Entry (PTE)?

- A pointer to the next-level page table or to the actual page.
- Permission bits: valid, read-only, read-write, write-only.
- Example: the Intel x86 PTE. The address has the same format as the previous slide (10 bits, 10 bits, 12-bit offset); intermediate page tables are called directories. (A field-extraction sketch follows this group of slides.)

Bits    Field
31-12   Page frame number (physical page number)
11-9    Free (available to the OS)
8       0
7       L: 4 MB page (directory entry only); the bottom 22 bits of the virtual address serve as the offset
6       D: dirty (PTE only); the page has been modified recently
5       A: accessed; the page has been accessed recently
4       PCD: page cache disabled (the page cannot be cached)
3       PWT: page write transparent; the external cache is write-through
2       U: user accessible
1       W: writeable
0       P: present (same as the valid bit in other architectures)

Three Advantages of Virtual Memory

- Translation:
  - A program can be given a consistent view of memory, even though physical memory is scrambled.
  - Makes multithreading reasonable (now used a lot!).
  - Only the most important part of a program (the "working set") must be in physical memory.
  - Contiguous structures (like stacks) use only as much physical memory as necessary, yet can still grow later.
- Protection:
  - Different threads (or processes) are protected from each other.
  - Different pages can be given special behavior (read-only, invisible to user programs, etc.).
  - Kernel data is protected from user programs; very important for protection from malicious programs.
- Sharing:
  - The same physical page can be mapped to multiple users ("shared memory").
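The field-extraction sketch promised in the PTE slide, following the bit layout above; the macro names are mine, not Intel's:

```c
#include <stdbool.h>
#include <stdint.h>

#define PTE_P   (1u << 0)   /* present                    */
#define PTE_W   (1u << 1)   /* writeable                  */
#define PTE_U   (1u << 2)   /* user accessible            */
#define PTE_PWT (1u << 3)   /* page write transparent     */
#define PTE_PCD (1u << 4)   /* page cache disabled        */
#define PTE_A   (1u << 5)   /* accessed                   */
#define PTE_D   (1u << 6)   /* dirty (PTE only)           */
#define PTE_L   (1u << 7)   /* 4 MB page (directory only) */

/* The page frame number lives in bits 31-12. */
static inline uint32_t pte_frame(uint32_t pte)   { return pte & 0xFFFFF000u; }
static inline bool     pte_present(uint32_t pte) { return (pte & PTE_P) != 0; }
static inline bool     pte_dirty(uint32_t pte)   { return (pte & PTE_D) != 0; }
```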

Large Address Space Support

- Two-level page table. Virtual address: 10-bit P1 index | 10-bit P2 index | 12-bit offset. The page-table pointer selects the first-level table; P1 indexes it to find a second-level table; P2 indexes that to find the physical page number, which combines with the offset to form the physical address. Pages are 4 KB; each PTE is 4 bytes.
- Single-level page table: with 4 KB pages and a 32-bit address, the table needs 1M entries, and each process needs its own page table!
- Multi-level page table: allows sparseness of the page table, and portions of the table can be swapped to disk. (A table-walk sketch follows this group of slides.)

Translation Look-Aside Buffers (TLB)

- A TLB is a cache on translations: fully associative, set associative, or direct mapped.
- The CPU presents a virtual address to the TLB; on a hit, the physical address goes straight to the cache and main memory. On a miss, translation proceeds through the page tables and the result is saved in the TLB.
- TLBs are small: typically not more than 128-256 entries, usually fully associative.

Caching Applied to Address Translation

CPU -> virtual address -> TLB. Cached? Yes: use the physical address directly. No: translate via the MMU, save the result in the TLB, then perform the (physical, untranslated) read or write.

- The question is one of page locality: does it exist?
  - Instruction accesses spend a lot of time on the same page (since accesses are sequential).
  - Stack accesses have definite locality of reference.
  - Data accesses have less page locality, but still some.
- Can we have a TLB hierarchy? Sure: multiple levels at different sizes/speeds.

What Actually Happens on a TLB Miss?

- Hardware-traversed page tables: on a TLB miss, hardware in the MMU looks at the current page table to fill the TLB (it may walk multiple levels).
  - If the PTE is valid, the hardware fills the TLB and the processor never knows.
  - If the PTE is marked invalid, it causes a page fault, after which the kernel decides what to do.
- Software-traversed page tables (as in MIPS): on a TLB miss, the processor receives a TLB fault, and the kernel traverses the page table to find the PTE.
  - If the PTE is valid, it fills the TLB and returns from the fault.
  - If the PTE is marked invalid, it internally calls the page-fault handler.
- Most chip sets provide hardware traversal. Modern operating systems tend to have more TLB faults since they use translation for many things, for example shared segments and user-level portions of an operating system.
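The table-walk sketch promised above: a two-level translation as a software TLB-refill handler might perform it. Treating physical addresses as directly dereferenceable pointers is a simplification, and the names are mine:

```c
#include <stdint.h>

#define PTE_PRESENT 0x1u

/* Virtual address: 10-bit P1 index | 10-bit P2 index | 12-bit offset.
 * Returns the physical address, or 0 to signal a page fault. */
uint32_t translate(const uint32_t *page_dir, uint32_t va) {
    uint32_t p1  = (va >> 22) & 0x3FFu;   /* first-level (directory) index */
    uint32_t p2  = (va >> 12) & 0x3FFu;   /* second-level (table) index    */
    uint32_t off =  va        & 0xFFFu;   /* offset within the 4 KB page   */

    uint32_t pde = page_dir[p1];
    if (!(pde & PTE_PRESENT))
        return 0;                          /* page fault */

    const uint32_t *table =
        (const uint32_t *)(uintptr_t)(pde & 0xFFFFF000u);
    uint32_t pte = table[p2];
    if (!(pte & PTE_PRESENT))
        return 0;                          /* page fault */

    return (pte & 0xFFFFF000u) | off;      /* physical page # | offset */
}
```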

Clock Algorithm: Not Recently Used

- The clock algorithm approximates LRU (which is itself an approximation to MIN): replace an old page, not the oldest page. (A sketch in C follows this group of slides.)
- Details:
  - Hardware keeps a "use" bit per physical page and sets it on each reference. If the use bit isn't set, the page has not been referenced in a long time.
  - A single clock hand sweeps the set of all pages, advancing only on page faults (not in real time), checking for and marking pages not used recently.
  - On a page fault: advance the clock hand and check the use bit.
    - 1: used recently; clear the bit and leave the page alone.
    - 0: this page is a candidate for replacement.

Example: MIPS R3000 Pipeline

- Pipeline: Inst Fetch (TLB, I-Cache) | Dcd/Reg (RF) | ALU/E.A. (operation, E.A. TLB) | Memory (D-Cache) | Write Reg (WB)
- Virtual address space: ASID (6 bits) | virtual page number (20 bits) | offset (12 bits)
  - 0xx: user segment (caching based on PT/TLB entry)
  - 100: kernel physical space, cached
  - 101: kernel physical space, uncached
  - 11x: kernel virtual space
- The ASID allows context switching among 64 user processes without a TLB flush.
- TLB: 64 entries, on-chip, fully associative; software TLB fault handler.

Reducing Translation Time Further: Overlapping TLB and Cache Access

- As described so far, the TLB lookup is in series with the cache lookup: the virtual page number is looked up (fully associatively, checking V and access rights) in the TLB to obtain the physical page number, which combines with the offset to form the physical address before the cache is accessed.
- Machines with TLBs go one step further: they overlap the TLB lookup with the cache access. This works because the offset is available early.
- Example with a 4K cache: the 12-bit offset supplies a 10-bit cache index plus a 2-bit displacement, so the cache can be indexed while the TLB performs its associative lookup on the 20-bit page number; the frame number (FN) returned by the TLB is then compared against the cache tag to produce hit/miss.
- What if the cache size is increased to 8KB? The overlap is no longer complete; we need to do something else (see CS152/252).
- Another option: virtual caches, where the tags in the cache are virtual addresses, so translation only happens on cache misses.
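The clock-algorithm sketch promised above. In real hardware the use bits are set by the MMU on each reference; here an array stands in for them, and the sizes are mine:

```c
#include <stdbool.h>

#define NPAGES 8             /* number of physical pages (assumed) */

static bool use_bit[NPAGES]; /* set by "hardware" on each reference */
static int  hand;            /* the single clock hand */

/* Called on a page fault: advance the hand until a page with a
 * clear use bit is found; that page is the replacement candidate. */
int clock_select_victim(void) {
    for (;;) {
        if (use_bit[hand]) {
            use_bit[hand] = false;        /* used recently: clear and skip */
            hand = (hand + 1) % NPAGES;
        } else {
            int victim = hand;            /* not used recently: evict */
            hand = (hand + 1) % NPAGES;
            return victim;
        }
    }
}
```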

Problems with Overlapped TLB Access

- Overlapped access requires that the address bits used to index into the cache do not change as a result of VA translation.
- This usually limits things to small caches, large page sizes, or highly set-associative caches if you want a large cache.
- Example: suppose everything stays the same except that the cache is increased to 8 KB instead of 4 KB. The cache index grows to 11 bits (plus the 2-bit displacement), but the virtual page number still covers the upper 20 bits, so one index bit is changed by VA translation yet is needed for the cache lookup.
- Solutions: go to 8 KB page sizes; go to a 2-way set-associative cache (1K entries x 2 ways); or have SW guarantee VA[13] = PA[13].

Summary #1/3: The Cache Design Space

- Several interacting dimensions: cache size, block size, associativity, replacement policy, write-through vs. write-back, write allocation.
- The optimal choice is a compromise: it depends on access characteristics (the workload, and the use as I-cache, D-cache, or TLB) and on technology/cost.
- Simplicity often wins.

Summary #2/3: Caches

- The principle of locality: programs access a relatively small portion of the address space at any instant of time.
  - Temporal locality: locality in time.
  - Spatial locality: locality in space.
- Three major categories of cache misses:
  - Compulsory misses: sad facts of life (example: cold-start misses).
  - Capacity misses: increase cache size.
  - Conflict misses: increase cache size and/or associativity. Nightmare scenario: the ping-pong effect!
- Write policy: write-through vs. write-back.
- Today CPU time is a function of (ops, cache misses), not just f(ops): this affects compilers, data structures, and algorithms.

Summary #3/3: TLB, Virtual Memory

- Page tables map virtual addresses to physical addresses; TLBs are important for fast translation.
- TLB misses are significant in processor performance: funny times, as most systems can't access all of the 2nd-level cache without TLB misses!
- Caches, TLBs, and virtual memory can all be understood by examining how they deal with four questions: 1) Where can a block be placed? 2) How is a block found? 3) Which block is replaced on a miss? 4) How are writes handled?
- Today VM allows many processes to share a single memory without having to swap all processes to disk; today VM protection is more important than the memory-hierarchy benefits, but computers are still insecure.
- Prepare for the debate + quiz on Wednesday.